Future Internet, Volume 16, Issue 8 (August 2024) – 45 articles

Cover Story: This paper introduces an innovative approach to developing a head-mounted fault display system that integrates predictive capabilities, including a deep-learning long short-term memory (LSTM) neural network model combined with anomaly explanations, for efficient predictive maintenance tasks. A 3D virtual model, created from sampled and recorded data and coupled with the deep neural diagnoser model, is then designed. By applying this methodology to a wind farm dataset provided by Energias De Portugal, we aim to support maintenance managers in making informed decisions about inspection, replacement, and repair tasks.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link, and use the free Adobe Reader to open them.
22 pages, 2607 KiB  
Article
A Method to Optimize Deployment of Directional Sensors for Coverage Enhancement in the Sensing Layer of IoT
by Peng Wang and Yonghua Xiong
Future Internet 2024, 16(8), 302; https://doi.org/10.3390/fi16080302 - 22 Aug 2024
Viewed by 443
Abstract
Directional sensor networks are a widely used architecture in the sensing layer of the Internet of Things (IoT), with excellent data collection and transmission capabilities. Coverage holes caused by the random deployment of sensors are the main factor restricting the quality of data collection in the IoT sensing layer. Determining how to enhance coverage performance by repairing coverage holes is a very challenging task. To this end, we propose a node deployment optimization method to enhance the coverage performance of the IoT sensing layer. Firstly, with the goal of maximizing the effective coverage area, an improved particle swarm optimization (IPSO) algorithm is used to solve for the optimal set of sensing directions. Secondly, we propose a repair path search method based on the improved sparrow search algorithm (ISSA), using the minimum exposure path (MEP) found as the repair path. Finally, a node scheduling algorithm is designed based on the MEP to determine the optimal deployment locations of mobile nodes and achieve coverage enhancement. The simulation results show that, compared with existing algorithms, the proposed node deployment optimization method can significantly improve the coverage rate of the IoT sensing layer and reduce energy consumption during the redeployment process. Full article
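
To make the direction-optimization step concrete, the sketch below tunes sensing directions for a handful of fixed directional sensors over a grid-approximated field using a standard particle swarm update. The positions, sensing radius, field of view, and PSO coefficients are illustrative assumptions; the paper's IPSO improvements and the ISSA/MEP repair stage are not reproduced here.

```python
# Minimal particle swarm sketch for optimizing directional-sensor orientations,
# assuming fixed sensor positions, a circular-sector sensing model, and a grid
# approximation of the covered area.
import numpy as np

rng = np.random.default_rng(0)
POS = rng.uniform(0, 100, size=(6, 2))   # fixed (x, y) of 6 directional sensors
R, FOV = 25.0, np.pi / 3                 # sensing radius and angular field of view
GRID = np.stack(np.meshgrid(np.arange(0, 100, 2.0),
                            np.arange(0, 100, 2.0)), -1).reshape(-1, 2)

def coverage(dirs):
    """Fraction of grid points inside at least one sensing sector."""
    covered = np.zeros(len(GRID), dtype=bool)
    for (x, y), theta in zip(POS, dirs):
        d = GRID - (x, y)
        dist = np.linalg.norm(d, axis=1)
        ang = np.abs((np.arctan2(d[:, 1], d[:, 0]) - theta + np.pi) % (2 * np.pi) - np.pi)
        covered |= (dist <= R) & (ang <= FOV / 2)
    return covered.mean()

# Standard PSO update with inertia; the paper's "improved" variant would adapt
# these coefficients, but the overall structure is the same.
n_particles, w, c1, c2 = 20, 0.7, 1.5, 1.5
X = rng.uniform(0, 2 * np.pi, (n_particles, len(POS)))   # candidate direction sets
V = np.zeros_like(X)
P, p_fit = X.copy(), np.array([coverage(x) for x in X])  # personal bests
g = P[p_fit.argmax()].copy()                             # global best

for _ in range(50):
    r1, r2 = rng.random(X.shape), rng.random(X.shape)
    V = w * V + c1 * r1 * (P - X) + c2 * r2 * (g - X)
    X = (X + V) % (2 * np.pi)
    fit = np.array([coverage(x) for x in X])
    better = fit > p_fit
    P[better], p_fit[better] = X[better], fit[better]
    g = P[p_fit.argmax()].copy()

print(f"best effective coverage: {p_fit.max():.2%}")
```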

27 pages, 5023 KiB  
Article
Beat the Heat: Syscall Attack Detection via Thermal Side Channel
by Teodora Vasilas, Claudiu Bacila and Remus Brad
Future Internet 2024, 16(8), 301; https://doi.org/10.3390/fi16080301 - 21 Aug 2024
Viewed by 631 | Correction
Abstract
As the complexity and integration of electronic devices increase, understanding and mitigating side-channel vulnerabilities will remain a critical area of cybersecurity research. New and intriguing software-based thermal side-channel attacks and countermeasures use thermal emissions from a device to extract or defend sensitive information by reading the built-in thermal sensors via software. This work extends the Hot-n-Cold anomaly detection technique, applying it in circumstances much closer to real-world computational environments by detecting irregularities in Linux command behavior through CPU temperature monitoring. The novelty of this approach lies in the introduction of five types of noise across the CPU, including moving files, performing extended math computations, playing songs, and browsing the web while the attack detector is running. We employed Hot-n-Cold to monitor core temperatures on three types of CPUs utilizing two commonly used Linux terminal commands, ls and chmod. The results show a high correlation, approaching 0.96, between the original Linux command and a crafted command augmented with vulnerable system calls. Additionally, a machine learning algorithm was used to classify whether a thermal trace is augmented or not, with an accuracy of up to 88%. This research demonstrates the potential for detecting attacks through thermal sensors even when there are different types of noise in the CPU, simulating a real-world scenario. Full article
(This article belongs to the Special Issue Cyber Security in the New "Edge Computing + IoT" World)
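
As a rough illustration of the detection principle, the sketch below samples a CPU thermal trace from the Linux sysfs sensors and computes the Pearson correlation against a reference trace. The sysfs path is standard on Linux, but the sampling rate and the 0.9 decision threshold are illustrative assumptions, not the paper's Hot-n-Cold parameters.

```python
# Compare a reference thermal trace of a Linux command with a fresh candidate
# trace read from the kernel's built-in sensors (Linux only).
import time
import numpy as np

def read_core_temp(zone=0):
    # Thermal sensors are exposed under sysfs in millidegrees Celsius;
    # the zone-to-core mapping is hardware-dependent.
    with open(f"/sys/class/thermal/thermal_zone{zone}/temp") as f:
        return int(f.read()) / 1000.0

def sample_trace(n=200, interval=0.01):
    """Sample CPU temperature while the command of interest is running."""
    trace = []
    for _ in range(n):
        trace.append(read_core_temp())
        time.sleep(interval)
    return np.array(trace)

# In the paper, a correlation near 0.96 between a command and its
# syscall-augmented variant is what makes anomaly detection feasible;
# the 0.9 cutoff below is purely illustrative.
reference = sample_trace()
candidate = sample_trace()
r = np.corrcoef(reference, candidate)[0, 1]
print(f"Pearson r = {r:.3f}", "-> anomalous" if r < 0.9 else "-> consistent")
```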

22 pages, 1871 KiB  
Article
Wireless and Fiber-Based Post-Quantum-Cryptography-Secured IPsec Tunnel
by Daniel Christian Lawo, Rana Abu Bakar, Abraham Cano Aguilera, Filippo Cugini, José Luis Imaña, Idelfonso Tafur Monroy and Juan Jose Vegas Olmos
Future Internet 2024, 16(8), 300; https://doi.org/10.3390/fi16080300 - 21 Aug 2024
Viewed by 1281
Abstract
In the near future, commercially accessible quantum computers are anticipated to revolutionize the world as we know it. These advanced machines are predicted to render traditional cryptographic security measures, deeply ingrained in contemporary communication, obsolete. While symmetric cryptography methods like AES can withstand quantum assaults if key sizes are doubled compared to current standards, asymmetric cryptographic techniques, such as RSA, are vulnerable to compromise. Consequently, there is a pressing need to transition towards post-quantum cryptography (PQC) principles in order to safeguard our privacy effectively. A key challenge is to integrate PQC into existing protocols and thus into the existing communication infrastructure. In this work, we report on the first experimental IPsec tunnel secured by the PQC algorithms Falcon, Dilithium, and Kyber. We deploy our IPsec tunnel in two scenarios. The first scenario represents a high-performance data center environment where many machines are interconnected via high-speed networks. We achieve an IPsec tunnel with an AES-256 GCM encrypted east–west throughput of 100 Gbit/s line rate. The second scenario shows an IPsec tunnel between a wireless NVIDIA Jetson and the cloud that achieves a 0.486 Gbit/s AES-256 GCM encrypted north–south throughput. This case represents a mobile device that communicates securely with applications running in the cloud. Full article
(This article belongs to the Special Issue Edge Intelligence: Edge Computing for 5G and the Internet of Things)
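
For a flavor of the key establishment behind such a tunnel, the hedged sketch below performs a Kyber key encapsulation using the open-source liboqs-python bindings (the `oqs` package). The algorithm name and API follow liboqs-python at the time of writing and may differ across versions, and the actual IKEv2/IPsec integration reported in the paper is considerably more involved.

```python
# Kyber KEM round trip with liboqs-python: both sides end up with the same
# shared secret, which could then seed AES-256 GCM session keys.
import oqs

kem_alg = "Kyber768"  # newer liboqs releases also expose this as "ML-KEM-768"
with oqs.KeyEncapsulation(kem_alg) as initiator, oqs.KeyEncapsulation(kem_alg) as responder:
    public_key = initiator.generate_keypair()                   # initiator publishes its key
    ciphertext, secret_resp = responder.encap_secret(public_key)  # responder encapsulates
    secret_init = initiator.decap_secret(ciphertext)            # initiator decapsulates
    assert secret_init == secret_resp   # shared key material for the tunnel
    print("shared secret length:", len(secret_init), "bytes")
```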

18 pages, 7423 KiB  
Article
Leveraging Internet News-Based Data for Rockfall Hazard Susceptibility Assessment on Highways
by Kieu Anh Nguyen, Yi-Jia Jiang, Chiao-Shin Huang, Meng-Hsun Kuo and Walter Chen
Future Internet 2024, 16(8), 299; https://doi.org/10.3390/fi16080299 - 21 Aug 2024
Viewed by 581
Abstract
Over three-quarters of Taiwan’s landmass consists of mountainous slopes with steep gradients, leading to frequent rockfall hazards that obstruct traffic and cause injuries and fatalities. This study used Google Alerts to compile internet news on rockfall incidents along Taiwan’s highway system from April 2019 to February 2024. The locations of these rockfalls were geolocated using Google Earth and integrated with geographical, topographical, environmental, geological, and socioeconomic variables. Employing machine learning algorithms, particularly the Random Forest algorithm, we analyzed the potential for rockfall hazards along roadside slopes. The model achieved an overall accuracy of 0.8514 on the test dataset, with a sensitivity of 0.8378, correctly identifying 83.8% of rockfall locations. Shapley Additive Explanations (SHAP) analysis highlighted that factors such as slope angle and distance to geologically sensitive areas are pivotal in determining rockfall locations. The study underscores the utility of internet-based data collection in providing comprehensive coverage of Taiwan’s highway system, enabling the first broad analysis of rockfall hazard susceptibility for the entire highway network. The consistent importance of topographical and geographical features suggests that integrating detailed spatial data could further enhance predictive performance. The combined use of Random Forest and SHAP analyses offers a robust framework for understanding and improving predictive models, aiding in the development of effective strategies for risk management and mitigation in rockfall-prone areas, ultimately contributing to safer and more reliable transportation networks in mountainous regions. Full article
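
The modeling pipeline described above follows a common pattern, sketched below with scikit-learn and the shap package on placeholder features; the feature names and synthetic labels are illustrative, not the study's dataset.

```python
# Random Forest on tabular slope features, with SHAP ranking feature importance.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = pd.DataFrame({
    "slope_angle": rng.uniform(0, 70, 500),
    "dist_to_sensitive_area_m": rng.uniform(0, 5000, 500),
    "elevation_m": rng.uniform(0, 3000, 500),
    "annual_rainfall_mm": rng.uniform(1000, 4000, 500),
})
# Toy label: steeper slopes near sensitive areas are more rockfall-prone.
y = ((X["slope_angle"] > 35) & (X["dist_to_sensitive_area_m"] < 1500)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("test accuracy:", model.score(X_te, y_te))

# Per-feature attributions; their mean |value| gives the global ranking the
# study reports for slope angle and distance. The packaging of shap_values
# varies across shap versions, hence the defensive unwrapping.
explainer = shap.TreeExplainer(model)
sv = explainer.shap_values(X_te)
sv_pos = sv[1] if isinstance(sv, list) else (sv[..., 1] if sv.ndim == 3 else sv)
print(dict(zip(X.columns, np.abs(sv_pos).mean(axis=0).round(3))))
```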

29 pages, 521 KiB  
Review
A Survey on the Use of Large Language Models (LLMs) in Fake News
by Eleftheria Papageorgiou, Christos Chronis, Iraklis Varlamis and Yassine Himeur
Future Internet 2024, 16(8), 298; https://doi.org/10.3390/fi16080298 - 19 Aug 2024
Viewed by 5265
Abstract
The proliferation of fake news and fake profiles on social media platforms poses significant threats to information integrity and societal trust. Traditional detection methods, including rule-based approaches, metadata analysis, and human fact-checking, have been employed to combat disinformation, but these methods often fall short in the face of increasingly sophisticated fake content. This review article explores the emerging role of Large Language Models (LLMs) in enhancing the detection of fake news and fake profiles. We provide a comprehensive overview of the nature and spread of disinformation, followed by an examination of existing detection methodologies. The article delves into the capabilities of LLMs in generating both fake news and fake profiles, highlighting their dual role as both a tool for disinformation and a powerful means of detection. We discuss the various applications of LLMs in text classification, fact-checking, verification, and contextual analysis, demonstrating how these models surpass traditional methods in accuracy and efficiency. Additionally, the article covers LLM-based detection of fake profiles through profile attribute analysis, network analysis, and behavior pattern recognition. Through comparative analysis, we showcase the advantages of LLMs over conventional techniques and present case studies that illustrate practical applications. Despite their potential, LLMs face challenges such as computational demands and ethical concerns, which we discuss in more detail. The review concludes with future directions for research and development in LLM-based fake news and fake profile detection, underscoring the importance of continued innovation to safeguard the authenticity of online information. Full article

19 pages, 6601 KiB  
Article
An Innovative Recompression Scheme for VQ Index Tables
by Yijie Lin, Jui-Chuan Liu, Ching-Chun Chang and Chin-Chen Chang
Future Internet 2024, 16(8), 297; https://doi.org/10.3390/fi16080297 - 19 Aug 2024
Viewed by 333
Abstract
As we move into the digital era, the pace of technological advancement is accelerating rapidly. Network traffic often becomes congested during the transmission of large data volumes. To mitigate this, data compression plays a crucial role in minimizing transmitted data. Vector quantization (VQ) stands out as a potent compression technique where each image block is encoded independently as an index linked to a codebook, effectively reducing the bit rate. In this paper, we introduce a novel scheme for recompressing VQ indices, enabling lossless restoration of the original indices during decoding without compromising visual quality. Our method not only considers pixel correlations within each image block but also leverages correlations between neighboring blocks, further optimizing the bit rate. The experimental results demonstrated the superior performance of our approach over existing methods. Full article
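
For readers unfamiliar with VQ index tables, the sketch below runs a minimal encode/decode round trip with NumPy: each 4×4 block becomes one codebook index. The recompression of that index table, which is the paper's contribution, is not shown; the random codebook stands in for one trained with, e.g., the LBG algorithm.

```python
# Vector quantization round trip: image blocks -> index table -> reconstruction.
import numpy as np

rng = np.random.default_rng(1)
image = rng.integers(0, 256, (64, 64)).astype(np.float64)
B = 4                                                 # block size
blocks = image.reshape(64 // B, B, 64 // B, B).swapaxes(1, 2).reshape(-1, B * B)

codebook = rng.integers(0, 256, (256, B * B)).astype(np.float64)  # LBG-trained in practice

# Encoding: nearest codeword per block -> one 8-bit index per 16 pixels.
dists = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
indices = dists.argmin(1)                             # this is the "VQ index table"

# Decoding: look the codewords back up and reassemble the image.
rec = codebook[indices].reshape(64 // B, 64 // B, B, B).swapaxes(1, 2).reshape(64, 64)
print("index table shape:", indices.reshape(16, 16).shape)
print("reconstruction MSE:", ((image - rec) ** 2).mean().round(1))
```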

20 pages, 2570 KiB  
Article
A Microservice-Based Smart Agriculture System to Detect Animal Intrusion at the Edge
by Jinpeng Miao, Dasari Rajasekhar, Shivakant Mishra, Sanjeet Kumar Nayak and Ramanarayan Yadav
Future Internet 2024, 16(8), 296; https://doi.org/10.3390/fi16080296 - 16 Aug 2024
Cited by 1 | Viewed by 581
Abstract
Smart agriculture stands as a promising domain for IoT-enabled technologies, with the potential to elevate crop quality, quantity, and operational efficiency. However, implementing a smart agriculture system encounters challenges such as the high latency and bandwidth consumption linked to cloud computing, Internet disconnections in rural locales, and the imperative of cost efficiency for farmers. Addressing these hurdles, this paper advocates a fog-based smart agriculture infrastructure integrating edge computing and LoRa communication. We tackle farmers’ prime concern of animal intrusion by presenting a solution leveraging low-cost PIR sensors, cameras, and computer vision to detect intrusions and predict animal locations using an innovative algorithm. Our system detects intrusions pre-emptively, identifies intruders, forecasts their movements, and promptly alerts farmers. Additionally, we compare our proposed strategy with other approaches and measure their power consumptions, demonstrating significant energy savings afforded by our strategy. Experimental results highlight the effectiveness, energy efficiency, and cost-effectiveness of our system compared to state-of-the-art systems. Full article

17 pages, 1231 KiB  
Article
Dynamic Graph Representation Learning for Passenger Behavior Prediction
by Mingxuan Xie, Tao Zou, Junchen Ye, Bowen Du and Runhe Huang
Future Internet 2024, 16(8), 295; https://doi.org/10.3390/fi16080295 - 15 Aug 2024
Viewed by 532
Abstract
Passenger behavior prediction aims to track passenger travel patterns through historical boarding and alighting data, enabling the analysis of urban station passenger flow and timely risk management. This is crucial for smart city development and public transportation planning. Existing research primarily relies on statistical methods and sequential models to learn from individual historical interactions, which ignores the correlations between passengers and stations. To address these issues, this paper proposes DyGPP, which leverages dynamic graphs to capture the intricate evolution of passenger behavior. First, we formalize passengers and stations as heterogeneous vertices in a dynamic graph, with connections between vertices representing interactions between passengers and stations. Then, we sample the historical interaction sequences for passengers and stations separately. We capture the temporal patterns from individual sequences and correlate the temporal behavior between the two sequences. Finally, we use an MLP-based encoder to learn the temporal patterns in the interactions and generate real-time representations of passengers and stations. Experiments on real-world datasets confirmed that DyGPP outperformed current models in the behavior prediction task, demonstrating the superiority of our model. Full article
(This article belongs to the Special Issue IoT, Edge, and Cloud Computing in Smart Cities)

12 pages, 1382 KiB  
Article
Establishing a Model for the User Acceptance of Cybersecurity Training
by Wesam Fallatah, Joakim Kävrestad and Steven Furnell
Future Internet 2024, 16(8), 294; https://doi.org/10.3390/fi16080294 - 15 Aug 2024
Viewed by 644
Abstract
Cybersecurity is established as fundamental for organisations and individuals engaging with digital technology. A central topic in cybersecurity is user behaviour, which has been shown to be the root cause or enabler in the majority of cyber incidents, with a resultant need to empower users to adopt secure behaviour. Researchers and practitioners agree that a crucial step in empowering users to adopt secure behaviour is training. Accordingly, many different methods for cybersecurity training are discussed in the scientific literature and adopted in practice. However, research suggests that those training efforts are not effective enough, and one commonly mentioned reason is user adoption problems. In essence, users are not engaging with the provided training to the extent needed to benefit from it as expected. While the perception and adoption of individual training methods are discussed in the scientific literature, cohesive studies on the factors that impact user adoption are few and far between. To that end, this paper focuses on the user acceptance of cybersecurity training using the technology acceptance model as a theory base. Based on 22 included publications, the research provides an overview of the cybersecurity training acceptance factors that have been discussed in the existing scientific literature. The main contributions are a cohesive compilation of existing knowledge about factors that impact the user acceptance of cybersecurity training and the introduction of the CTAM, a cybersecurity training acceptance model which pinpoints four factors—regulatory control, worry, apathy, and trust—that influence users’ intention to adopt cybersecurity training. The results can be used to guide future research as well as practitioners implementing cybersecurity training. Full article
(This article belongs to the Section Cybersecurity)

16 pages, 1056 KiB  
Article
Development of a Novel Open Control System Implementation Method under Industrial IoT
by Lisi Liu, Zijie Xu and Xiaobin Qu
Future Internet 2024, 16(8), 293; https://doi.org/10.3390/fi16080293 - 14 Aug 2024
Viewed by 572
Abstract
The closed architecture of modern control systems impedes their further development in the environment of the industrial IoT. The open control system has been proposed to tackle this issue. Numerous open control prototypes have been proposed, but they do not achieve a high degree of openness. According to the definition and criteria of open control systems, this paper suggests that the independence between control tasks, and between control tasks and infrastructures, is the key to the open control system under the industrial IoT. By formally describing the control domain and virtualizing control tasks to address these keys, this paper proposes a new method to implement open control systems under the industrial IoT. Specifically, given the hybrid characteristic of the control domain, a hierarchical semantic formalism based on an extended finite state machine and a dependency network model with a time property is designed to describe the control domain. Considering the infrastructure’s heterogeneity in the industrial IoT, a hybrid virtualization approach based on containers and WebAssembly is designed to virtualize control tasks. The proposed open control system implementation method is illustrated by constructing an open computer numerical control demonstration and is compared to current open control prototypes. Full article

16 pages, 430 KiB  
Article
Multi-Agent Deep-Q Network-Based Cache Replacement Policy for Content Delivery Networks
by Janith K. Dassanayake, Minxiao Wang, Muhammad Z. Hameed and Ning Yang
Future Internet 2024, 16(8), 292; https://doi.org/10.3390/fi16080292 - 14 Aug 2024
Viewed by 579
Abstract
In today’s digital landscape, content delivery networks (CDNs) play a pivotal role in ensuring rapid and seamless access to online content across the globe. By strategically deploying a network of edge servers in close proximity to users, CDNs optimize the delivery of digital content. One key mechanism involves caching frequently requested content at these edge servers, which not only alleviates the load on the source CDN server but also enhances the overall user experience. However, the exponential growth in user demands has led to increased network congestion, subsequently reducing the cache hit ratio within CDNs. To address this reduction, this paper presents an innovative approach for efficient cache replacement in a dynamic caching environment, maximizing the cache hit ratio via a cooperative cache replacement policy based on reinforcement learning. The proposed system model depicts a mesh network of CDNs, with edge servers catering to user requests, and a main source CDN server. The cache replacement problem is initially modeled as a Markov decision process and then extended to a multi-agent reinforcement learning problem. We propose a cooperative cache replacement algorithm based on a multi-agent deep-Q network (MADQN), where the edge servers cooperatively learn to efficiently replace the cached content to maximize the cache hit ratio. Experimental results are presented to validate the performance of our proposed approach. Notably, our MADQN policy exhibits superior cache hit ratios and lower average delays compared to traditional caching policies. Full article
(This article belongs to the Special Issue Industrial Internet of Things (IIoT): Trends and Technologies)
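
The core decision step can be pictured with the single-agent sketch below: a small Q-network scores each cache slot as an eviction candidate and is trained on one temporal-difference transition. The state features, reward, and sizes are illustrative assumptions; the paper's MADQN additionally coordinates one cooperating agent per edge server.

```python
# Deep-Q eviction sketch: state = per-slot (recency, frequency) features,
# action = slot to evict, reward = +1 on a later cache hit.
import torch
import torch.nn as nn

N_SLOTS, FEATS = 8, 2

q_net = nn.Sequential(
    nn.Linear(N_SLOTS * FEATS, 64), nn.ReLU(),
    nn.Linear(64, N_SLOTS),               # one Q-value per candidate eviction slot
)
opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)

def dqn_step(state, action, reward, next_state, gamma=0.9):
    """One temporal-difference update on a single transition."""
    q = q_net(state)[action]
    with torch.no_grad():
        target = reward + gamma * q_net(next_state).max()
    loss = (q - target) ** 2
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

state = torch.rand(N_SLOTS * FEATS)
evict = q_net(state).argmax().item()      # greedy choice of slot to evict
loss = dqn_step(state, evict, reward=1.0, next_state=torch.rand(N_SLOTS * FEATS))
print("evict slot", evict, "loss", round(loss, 4))
```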

37 pages, 1164 KiB  
Article
Early Ransomware Detection with Deep Learning Models
by Matan Davidian, Michael Kiperberg and Natalia Vanetik
Future Internet 2024, 16(8), 291; https://doi.org/10.3390/fi16080291 - 11 Aug 2024
Viewed by 971
Abstract
Ransomware is a growing-in-popularity type of malware that restricts access to the victim’s system or data until a ransom is paid. Traditional detection methods rely on analyzing the malware’s content, but these methods are ineffective against unknown or zero-day malware. Therefore, zero-day malware detection typically involves observing the malware’s behavior, specifically the sequence of application programming interface (API) calls it makes, such as reading and writing files or enumerating directories. While previous studies have used machine learning (ML) techniques to classify API call sequences, they have only considered the API call name. This paper systematically compares various subsets of API call features, different ML techniques, and context-window sizes to identify the optimal ransomware classifier. Our findings indicate that a context-window size of 7 is ideal, and the most effective ML techniques are CNN and LSTM. Additionally, augmenting the API call name with the operation result significantly enhances the classifier’s precision. Performance analysis suggests that this classifier can be effectively applied in real-time scenarios. Full article
(This article belongs to the Special Issue Generative Artificial Intelligence (AI) for Cybersecurity)
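
The feature setup the paper compares can be sketched as follows: API-call traces are cut into context windows of length 7, and each call is represented by its name embedding concatenated with an embedding of its operation result (the augmentation the paper finds most valuable) before an LSTM classifier. Vocabulary and layer sizes are placeholders.

```python
# Context windows over API-call traces, classified by a small LSTM.
import torch
import torch.nn as nn

WINDOW, VOCAB, RESULTS = 7, 300, 4        # context size 7; API names; result codes

def windows(seq, w=WINDOW):
    """Sliding context windows over one process's API call trace."""
    return [seq[i:i + w] for i in range(len(seq) - w + 1)]

class ApiLSTM(nn.Module):
    def __init__(self, emb=32, hidden=64):
        super().__init__()
        self.name_emb = nn.Embedding(VOCAB, emb)
        self.res_emb = nn.Embedding(RESULTS, emb)   # augmenting with the result
        self.lstm = nn.LSTM(2 * emb, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)            # ransomware vs. benign

    def forward(self, names, results):              # both: (batch, WINDOW) int tensors
        x = torch.cat([self.name_emb(names), self.res_emb(results)], dim=-1)
        _, (h, _) = self.lstm(x)
        return self.head(h[-1])

trace = torch.randint(0, VOCAB, (50,)).tolist()
print(len(windows(trace)), "windows from one 50-call trace")
model = ApiLSTM()
names = torch.randint(0, VOCAB, (16, WINDOW))
results = torch.randint(0, RESULTS, (16, WINDOW))
print(model(names, results).shape)                  # -> torch.Size([16, 2])
```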

22 pages, 2481 KiB  
Review
Blockchain Technology and Its Potential to Benefit Public Services Provision: A Short Survey
by Giorgio Piccardo, Lorenzo Conti and Alessio Martino
Future Internet 2024, 16(8), 290; https://doi.org/10.3390/fi16080290 - 9 Aug 2024
Viewed by 1203
Abstract
In the last few years, blockchain has emerged as a cutting-edge technology whose main advantages are transparency, traceability, immutability, enhanced efficiency, and trust, thanks to its decentralized nature. Although many people still identify blockchain with cryptocurrencies and the financial sector, it has many prospective applications beyond digital currency that can serve as use cases for which traditional infrastructures have become obsolete. Governments have started exploring its potential application to public services provision, as confirmed by the increasing number of adoption initiatives, projects, and tests. As the current public administration is often perceived as slow, bureaucratic, lacking transparency, and failing to involve citizens in decision-making processes, blockchain can establish itself as a tool that enables a process of disintermediation, which can revolutionize the way in which public services are managed and provided. In this paper, we will provide a survey of the main application areas which are likely to benefit from blockchain implementation, together with examples of practical implementations carried out by both state and local governments. Later, we will discuss the main challenges that may prevent its widespread adoption, such as government expenditure, technological maturity, and lack of public awareness. Finally, we will wrap up by providing indications on future areas of research for blockchain-based technologies. Full article

24 pages, 637 KiB  
Article
Testing Stimulus Equivalence in Transformer-Based Agents
by Alexis Carrillo and Moisés Betancort
Future Internet 2024, 16(8), 289; https://doi.org/10.3390/fi16080289 - 9 Aug 2024
Viewed by 793
Abstract
This study investigates the ability of transformer-based models (TBMs) to form stimulus equivalence (SE) classes. We employ BERT and GPT as TBM agents in SE tasks, evaluating their performance across training structures (linear series, one-to-many and many-to-one) and relation types (select–reject, select-only). Our findings demonstrate that both models performed above the mastery criterion in the baseline phase across all simulations (n = 12). However, they exhibit limited success in reflexivity, transitivity, and symmetry tests. Notably, both models achieved success only in the linear series structure with select–reject relations, failing in one-to-many and many-to-one structures, and in all select-only conditions. These results suggest that TBMs may be forming decision rules based on learned discriminations and reject relations, rather than responding according to equivalence class formation. The absence of reject relations appears to influence their responses and the occurrence of hallucinations. This research highlights the potential of SE simulations for: (a) comparative analysis of learning mechanisms, (b) explainability techniques for TBM decision-making, and (c) TBM benchmarking independent of pre-training or fine-tuning. Future investigations can explore upscaling simulations and utilize SE tasks within a reinforcement learning framework. Full article

28 pages, 5606 KiB  
Article
FIVADMI: A Framework for In-Vehicle Anomaly Detection by Monitoring and Isolation
by Khaled Mahbub, Antonio Nehme, Mohammad Patwary, Marc Lacoste and Sylvain Allio
Future Internet 2024, 16(8), 288; https://doi.org/10.3390/fi16080288 - 8 Aug 2024
Viewed by 837
Abstract
Self-driving vehicles have attracted significant attention from the automotive industry, which is heavily investing to reach the level of reliability needed from these safety-critical systems. Security of in-vehicle communications is mandatory to achieve this goal. Most of the existing research on detecting anomalies in in-vehicle communication does not take into account the low processing power of the in-vehicle network and ECUs (Electronic Control Units). Also, these approaches do not consider system-level isolation challenges, such as side-channel vulnerabilities, that may arise due to the adoption of new technologies in the automotive domain. This paper introduces and discusses the design of a framework to detect anomalies in in-vehicle communications, including side-channel attacks. The proposed framework supports real-time monitoring of data exchanges among the components of the in-vehicle communication network and ensures the isolation of the components in the in-vehicle network by deploying them in Trusted Execution Environments (TEEs). The framework is designed based on the AUTOSAR open standard for automotive software architecture and framework. The paper also discusses the implementation and evaluation of the proposed framework. Full article
(This article belongs to the Special Issue IoT, Edge, and Cloud Computing in Smart Cities)

21 pages, 17109 KiB  
Article
Dynamic Fashion Video Synthesis from Static Imagery
by Tasin Islam, Alina Miron, Xiaohui Liu and Yongmin Li
Future Internet 2024, 16(8), 287; https://doi.org/10.3390/fi16080287 - 8 Aug 2024
Viewed by 774
Abstract
Online shopping for clothing has become increasingly popular among many people. However, this trend comes with its own set of challenges. For example, it can be difficult for customers to make informed purchase decisions without trying on the clothes to see how they move and flow. We address this issue by introducing a new image-to-video generator called FashionFlow to generate fashion videos to show how clothing products move and flow on a person. By utilising a latent diffusion model and various other components, we are able to synthesise a high-fidelity video conditioned by a fashion image. The components include the use of pseudo-3D convolution, VAE, CLIP, frame interpolator and attention to generate a smooth video efficiently while preserving vital characteristics from the conditioning image. The contribution of our work is the creation of a model that can synthesise videos from images. We show how we use a pre-trained VAE decoder to process the latent space and generate a video. We demonstrate the effectiveness of our local and global conditioners, which help preserve the maximum amount of detail from the conditioning image. Our model is unique because it produces spontaneous and believable motion using only one image, while other diffusion models are either text-to-video or image-to-video using pre-recorded pose sequences. Overall, our research demonstrates a successful synthesis of fashion videos featuring models posing from various angles, showcasing the movement of the garment. Our findings hold great promise for improving and enhancing the online fashion industry’s shopping experience. Full article
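
The pseudo-3D convolution mentioned above factorizes a full 3D convolution into a spatial step followed by a temporal step, which is cheaper on video latents; a minimal PyTorch version, with arbitrary channel and kernel sizes, might look like this.

```python
# Pseudo-3D (factorized) convolution block for video latents.
import torch
import torch.nn as nn

class Pseudo3DConv(nn.Module):
    def __init__(self, c_in, c_out):
        super().__init__()
        # (1, k, k): convolve within each frame; (k, 1, 1): convolve across frames.
        self.spatial = nn.Conv3d(c_in, c_out, (1, 3, 3), padding=(0, 1, 1))
        self.temporal = nn.Conv3d(c_out, c_out, (3, 1, 1), padding=(1, 0, 0))

    def forward(self, x):                            # x: (batch, channels, frames, H, W)
        return self.temporal(torch.relu(self.spatial(x)))

video_latents = torch.randn(1, 4, 16, 32, 32)        # e.g. 16 frames of VAE latents
print(Pseudo3DConv(4, 4)(video_latents).shape)       # -> torch.Size([1, 4, 16, 32, 32])
```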

16 pages, 1963 KiB  
Article
Cross-Domain Fake News Detection Using a Prompt-Based Approach
by Jawaher Alghamdi, Yuqing Lin and Suhuai Luo
Future Internet 2024, 16(8), 286; https://doi.org/10.3390/fi16080286 - 8 Aug 2024
Viewed by 892
Abstract
The proliferation of fake news poses a significant challenge in today’s information landscape, spanning diverse domains and topics and undermining traditional detection methods confined to specific domains. In response, there is a growing interest in strategies for detecting cross-domain misinformation. However, traditional machine learning (ML) approaches often struggle with the nuanced contextual understanding required for accurate news classification. To address these challenges, we propose a novel contextualized cross-domain prompt-based zero-shot approach utilizing a pre-trained Generative Pre-trained Transformer (GPT) model for fake news detection (FND). In contrast to conventional fine-tuning methods reliant on extensive labeled datasets, our approach places particular emphasis on refining prompt integration and classification logic within the model’s framework. This refinement enhances the model’s ability to accurately classify fake news across diverse domains. Additionally, the adaptability of our approach allows for customization across diverse tasks by modifying prompt placeholders. Our research significantly advances zero-shot learning by demonstrating the efficacy of prompt-based methodologies in text classification, particularly in scenarios with limited training data. Through extensive experimentation, we illustrate that our method effectively captures domain-specific features and generalizes well to other domains, surpassing existing models in terms of performance. These findings contribute significantly to the ongoing efforts to combat fake news dissemination, particularly in environments with severely limited training data, such as online platforms. Full article
(This article belongs to the Special Issue Embracing Artificial Intelligence (AI) for Network and Service)
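
The prompt-based zero-shot setup can be sketched as below: the article text fills a template placeholder and the model's constrained completion is mapped back onto the label space. The template wording is illustrative, and `generate` is a stand-in for whatever GPT-style completion backend is used; no fine-tuning or labeled data is involved.

```python
# Zero-shot prompt classification: template -> completion -> label.
TEMPLATE = (
    "Decide whether the following news article is real or fake.\n"
    "Article: {article}\n"
    "Answer with exactly one word, 'real' or 'fake': "
)

def classify(article: str, generate) -> str:
    prompt = TEMPLATE.format(article=article.strip()[:2000])
    completion = generate(prompt).strip().lower()
    # Classification logic: map free-form output onto the label space.
    return "fake" if completion.startswith("fake") else "real"

# Example with a trivial stand-in "model"; swap in a real GPT backend here.
print(classify("Scientists confirm the moon is made of cheese.", lambda p: "fake"))
```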

19 pages, 1197 KiB  
Review
A Survey on Emerging Blockchain Technology Platforms for Securing the Internet of Things
by Yunus Kareem, Djamel Djenouri and Essam Ghadafi
Future Internet 2024, 16(8), 285; https://doi.org/10.3390/fi16080285 - 8 Aug 2024
Viewed by 1093
Abstract
The adoption of blockchain platforms to bolster the security of Internet of Things (IoT) systems has attracted significant attention in recent years. Currently, there is a lack of comprehensive and systematic survey papers in the literature addressing these platforms. This paper discusses six of the most popular emerging blockchain platforms adopted by IoT systems and analyses their usage in state-of-the-art works to solve security problems. The platforms were compared in terms of security features and other requirements. Findings from the study reveal that most blockchain components contribute directly or indirectly to IoT security. Blockchain platform components such as cryptography, consensus mechanisms, and hashing are common ways that security is achieved across all blockchain platforms for IoT. Technologies like the InterPlanetary File System (IPFS) and Transport Layer Security (TLS) can further enhance data and communication security when used alongside blockchain. To enhance the applicability of blockchain in resource-constrained IoT environments, future research should focus on refining cryptographic algorithms and consensus mechanisms to optimise performance and security. Full article
(This article belongs to the Special Issue Security and Privacy in Blockchains and the IoT III)

25 pages, 3302 KiB  
Article
Multi-Class Intrusion Detection Based on Transformer for IoT Networks Using CIC-IoT-2023 Dataset
by Shu-Ming Tseng, Yan-Qi Wang and Yung-Chung Wang
Future Internet 2024, 16(8), 284; https://doi.org/10.3390/fi16080284 - 8 Aug 2024
Cited by 2 | Viewed by 4545
Abstract
This study uses deep learning methods to explore an Internet of Things (IoT) network intrusion detection method based on the CIC-IoT-2023 dataset, which contains extensive data on real-life IoT environments. Based on this dataset, the study proposes an effective intrusion detection method that applies seven deep learning models, including a Transformer, to analyze network traffic characteristics and identify abnormal behavior and potential intrusions through binary and multi-class classification. Compared with other papers, we not only use a Transformer model but also consider the model’s performance in multi-class classification. Although the accuracy of the Transformer model in binary classification is lower than that of the DNN and CNN + LSTM hybrid models, it achieves better results in multi-class classification. The binary classification accuracy of our model is 0.74% higher than that of papers that also use a Transformer on TON-IoT. In multi-class classification, our best-performing model is the Transformer, which reaches 99.40% accuracy. Its accuracy is 3.8%, 0.65%, and 0.29% higher than the 95.60%, 98.75%, and 99.11% figures recorded in papers using the same dataset, respectively. Full article
(This article belongs to the Special Issue IoT Security: Threat Detection, Analysis and Defense)
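
A minimal version of a Transformer classifier over tabular flow features is sketched below in PyTorch, treating each feature as one token so the encoder can attend across features; the feature count, class count, and pooling choice are illustrative simplifications rather than the paper's exact architecture.

```python
# Transformer encoder over tabular network-flow features, multi-class output.
import torch
import torch.nn as nn

N_FEATURES, N_CLASSES, D = 46, 34, 64      # rough CIC-IoT-2023-like dimensions

class FlowTransformer(nn.Module):
    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(1, D)                          # one token per feature
        layer = nn.TransformerEncoderLayer(d_model=D, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(D, N_CLASSES)

    def forward(self, x):                                    # x: (batch, N_FEATURES)
        tokens = self.proj(x.unsqueeze(-1))                  # (batch, N_FEATURES, D)
        return self.head(self.encoder(tokens).mean(dim=1))   # pool, then classify

model = FlowTransformer()
print(model(torch.randn(8, N_FEATURES)).shape)               # -> torch.Size([8, 34])
```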

25 pages, 3477 KiB  
Article
Overlay and Virtual Private Networks Security Performances Analysis with Open Source Infrastructure Deployment
by Antonio Francesco Gentile, Davide Macrì, Emilio Greco and Peppino Fazio
Future Internet 2024, 16(8), 283; https://doi.org/10.3390/fi16080283 - 7 Aug 2024
Cited by 1 | Viewed by 882
Abstract
Nowadays, some of the most widely deployed infrastructures are Virtual Private Networks (VPNs) and Overlay Networks (ONs). They consist of hardware and software components designed to build private/secure channels, typically over the Internet, and are currently among the most reliable technologies for achieving this objective. VPNs are well-established and can be patched to address security vulnerabilities, while overlay networks represent the next-generation solution for secure communication. In this paper, for both VPNs and ONs, we analyze some important network performance components (RTT and bandwidth) while varying the type of overlay network utilized for interconnecting traffic between two or more hosts (in the same data center, in different data centers in the same building, or over the Internet). These networks establish connections between KVM (Kernel-based Virtual Machine) instances rather than the typical Docker/LXC/Podman containers. The first analysis assesses network performance as it is, without any overlay channels; the second establishes various channels without encryption; and the final analysis encapsulates overlay traffic via IPsec (transport mode), where encrypted channels like VTI are not already available for use. An extensive set of traffic simulation campaigns shows the obtained performance. Full article
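
The RTT side of such measurements can be approximated with a few lines of Python that time TCP connections over each channel under test (native, overlay, overlay plus IPsec); the host and port below are placeholders, and bandwidth would typically be measured with a dedicated tool such as iperf3.

```python
# Time repeated TCP connects to a peer to estimate RTT over a given channel.
import socket
import statistics
import time

def tcp_rtt(host, port, samples=10):
    rtts = []
    for _ in range(samples):
        t0 = time.perf_counter()
        with socket.create_connection((host, port), timeout=2):
            rtts.append((time.perf_counter() - t0) * 1000.0)  # ms
    return statistics.mean(rtts), statistics.stdev(rtts)

mean_ms, stdev_ms = tcp_rtt("10.0.0.2", 22)   # e.g. a KVM instance across the overlay
print(f"RTT {mean_ms:.2f} ms +/- {stdev_ms:.2f} ms")
```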

20 pages, 4390 KiB  
Article
Explainable Artificial Intelligence Approach for Improving Head-Mounted Fault Display Systems
by Abdelaziz Bouzidi, Lala Rajaoarisoa and Luka Claeys
Future Internet 2024, 16(8), 282; https://doi.org/10.3390/fi16080282 - 6 Aug 2024
Viewed by 945
Abstract
To fully harness the potential of wind turbine systems and meet high power demands while maintaining top-notch power quality, wind farm managers run their systems 24 h a day, 7 days a week. However, due to the system’s large size and the complex interactions of its many components operating at high power, frequent critical failures occur. As a result, it has become increasingly important to implement predictive maintenance to ensure the continued performance of these systems. This paper introduces an innovative approach to developing a head-mounted fault display system that integrates predictive capabilities, including a deep-learning long short-term memory (LSTM) neural network model, with anomaly explanations for efficient predictive maintenance tasks. Then, a 3D virtual model, created from sampled and recorded data coupled with the deep neural diagnoser model, is designed. To generate a transparent and understandable explanation of an anomaly, we propose a novel methodology to identify a possible subset of characteristic variables for accurately describing the behavior of a group of components. Depending on the presence and risk level of an anomaly, the parameter concerned is displayed with specific contextual information. The system then provides human operators with quick, accurate insights into anomalies and their potential causes, enabling them to take appropriate action. By applying this methodology to a wind farm dataset provided by Energias De Portugal, we aim to support maintenance managers in making informed decisions about inspection, replacement, and repair tasks. Full article
(This article belongs to the Special Issue Artificial Intelligence-Enabled Internet of Things (IoT))
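
The forecasting core of such a diagnoser is commonly an LSTM that predicts the next sample of each monitored channel, with large prediction residuals flagging anomalies; the PyTorch sketch below follows that pattern with illustrative window, channel, and threshold choices rather than the paper's configuration.

```python
# One-step-ahead LSTM forecaster; residual magnitude drives the anomaly flag.
import torch
import torch.nn as nn

class LSTMForecaster(nn.Module):
    def __init__(self, n_channels=4, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, n_channels)

    def forward(self, window):                   # (batch, time, channels)
        out, _ = self.lstm(window)
        return self.head(out[:, -1])             # one-step-ahead prediction

model = LSTMForecaster()
window = torch.randn(1, 48, 4)                   # e.g. 48 past SCADA samples
observed = torch.randn(1, 4)                     # the actual next sample
residual = (model(window) - observed).abs()
print("anomaly" if (residual > 2.0).any() else "nominal")   # threshold is illustrative
```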

16 pages, 948 KiB  
Article
Masketeer: An Ensemble-Based Pseudonymization Tool with Entity Recognition for German Unstructured Medical Free Text
by Martin Baumgartner, Karl Kreiner, Fabian Wiesmüller, Dieter Hayn, Christian Puelacher and Günter Schreier
Future Internet 2024, 16(8), 281; https://doi.org/10.3390/fi16080281 - 6 Aug 2024
Viewed by 797
Abstract
Background: The recent rise of large language models has triggered renewed interest in medical free text data, which holds critical information about patients and diseases. However, medical free text is also highly sensitive. Therefore, de-identification is typically required but is complicated since medical free text is mostly unstructured. With the Masketeer algorithm, we present an effective tool to de-identify German medical text. Methods: We used an ensemble of different masking classes to remove references to identifiable data from over 35,000 clinical notes in accordance with the HIPAA Safe Harbor Guidelines. To retain additional context for readers, we implemented an entity recognition scheme and corpus-wide pseudonymization. Results: The algorithm performed with a sensitivity of 0.943 and specificity of 0.933. Further performance analyses showed linear runtime complexity (O(n)) with both increasing text length and corpus size. Conclusions: In the future, large language models will likely be able to de-identify medical free text more effectively and thoroughly than handcrafted rules. However, such gold-standard de-identification tools based on large language models are yet to emerge. In the current absence of such, we hope to provide best practices for a robust rule-based algorithm designed with expert domain knowledge. Full article
(This article belongs to the Special Issue eHealth and mHealth)
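
A toy version of the ensemble-of-masking-classes idea is sketched below: each masking class recognizes one kind of identifier, and a corpus-wide map guarantees that the same entity always receives the same pseudonym; the regular expressions are deliberately simplistic placeholders for the algorithm's real rules.

```python
# Rule-based masking ensemble with corpus-wide pseudonymization.
import re

pseudonyms: dict[str, str] = {}          # corpus-wide, so references stay consistent

def pseudonymize(kind: str, value: str) -> str:
    key = f"{kind}:{value.lower()}"
    pseudonyms.setdefault(key, f"<{kind}_{len(pseudonyms) + 1}>")
    return pseudonyms[key]

MASKERS = [                               # (entity kind, pattern) "masking classes"
    ("DATE", re.compile(r"\b\d{1,2}\.\d{1,2}\.\d{4}\b")),
    ("PHONE", re.compile(r"\b\+?\d[\d /-]{7,}\d\b")),
    ("NAME", re.compile(r"\b(?:Herr|Frau|Dr\.)\s+[A-ZÄÖÜ][a-zäöüß]+\b")),
]

def mask(text: str) -> str:
    for kind, pattern in MASKERS:
        text = pattern.sub(lambda m: pseudonymize(kind, m.group()), text)
    return text

# The same entity ("Frau Müller") maps to the same pseudonym both times.
print(mask("Frau Müller, geb. 01.02.1960, Tel. +43 660 1234567, sah Frau Müller erneut."))
```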

15 pages, 3559 KiB  
Article
Advanced Denoising and Meta-Learning Techniques for Enhancing Smart Health Monitoring Using Wearable Sensors
by Minyechil Alehegn Tefera, Amare Mulatie Dehnaw, Yibeltal Chanie Manie, Cheng-Kai Yao, Shegaw Demessie Bogale and Peng-Chun Peng
Future Internet 2024, 16(8), 280; https://doi.org/10.3390/fi16080280 - 5 Aug 2024
Viewed by 1117
Abstract
This study introduces a novel meta-learning method to enhance diabetes detection using wearable sensor systems in smart health applications. Wearable sensor technology often needs to operate accurately across a wide range of users, each characterized by unique physiological and behavioral patterns. However, the specific data for a particular application or user group might be scarce. Moreover, collecting extensive training data from wearable sensor experiments is challenging, time-consuming, and expensive. In these cases, meta-learning can be particularly useful, as the model can quickly adapt to the nuances of new users or specific applications with minimal data. Therefore, to reduce the need for large amounts of training data and to enable the application of artificial intelligence (AI) in data-scarce scenarios, a meta-learning method is proposed. This meta-learning model has been implemented to forecast diabetes, resolve cross-talk issues, and accurately detect R peaks from overlapping electrocardiogram (ECG) signals affected by movement artifacts, poor electrode contact, electrical interference, or muscle activity. Motion artifacts from body movements, external conditions such as temperature, humidity, and electromagnetic interference, and the inherent quality and calibration of the sensor can all contribute to noise. Contact quality between the sensor and the skin, signal processing errors, power supply variations, user-generated interference from activities like talking or exercising, and the materials used in the wearable device also play significant roles in the overall noise in wearable sensor data and can significantly distort the true signal, leading to erroneous interpretations and potential diagnostic errors. Furthermore, a discrete wavelet transform (DWT) was implemented to improve the quality of the data and enhance the performance of the proposed model. The demonstrated results confirmed that, with only a limited amount of target data, the proposed meta-learning and DWT denoising method can adapt more quickly and improve the detection of diabetes compared to the traditional method. Therefore, the proposed system is cost-effective, flexible, faster, and adaptable, reduces the need for training data, and can enhance the accuracy of chronic disease detection such as diabetes in smart health systems. Full article
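
The DWT denoising stage can be sketched with the PyWavelets package: decompose the noisy signal, soft-threshold the detail coefficients, and reconstruct. The wavelet, decomposition level, and universal threshold below are common illustrative choices, not necessarily those of the paper.

```python
# Wavelet shrinkage denoising with PyWavelets on a stand-in ECG channel.
import numpy as np
import pywt

t = np.linspace(0, 1, 1024)
clean = np.sin(2 * np.pi * 5 * t)
noisy = clean + 0.4 * np.random.default_rng(0).normal(size=t.size)

coeffs = pywt.wavedec(noisy, "db4", level=5)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise estimate from finest scale
thr = sigma * np.sqrt(2 * np.log(noisy.size))         # universal threshold
coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
denoised = pywt.waverec(coeffs, "db4")[: noisy.size]

print("noise std before/after:",
      np.std(noisy - clean).round(3), np.std(denoised - clean).round(3))
```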

23 pages, 5717 KiB  
Article
Virtual Reality in the Classroom: Transforming the Teaching of Electrical Circuits in the Digital Age
by Diego Alejandro Albarracin-Acero, Fidel Alfonso Romero-Toledo, Claudia Esperanza Saavedra-Bautista and Edwan Anderson Ariza-Echeverri
Future Internet 2024, 16(8), 279; https://doi.org/10.3390/fi16080279 - 5 Aug 2024
Viewed by 1056
Abstract
In response to the digital transformation in education, this study explores the efficacy of virtual reality (VR) video games in teaching direct current electrical circuits at a public university in Colombia. Using a mixed-method action research approach, this study aimed to design, implement, and evaluate a VR-based educational strategy to enhance undergraduate learning experiences. The methodology integrated VR into the curriculum, facilitating a comparison of this innovative approach with traditional teaching methods. The results indicate that the VR strategy significantly improved students’ comprehension of electrical circuits and increased engagement, demonstrating the utility of immersive technologies in educational settings. Challenges such as the need for technological integration and curriculum adaptation were also identified. This study concludes that VR video games can effectively augment electrical engineering education, offering a model for incorporating advanced digital tools into higher education curricula. This approach aligns with ongoing trends in digital transformation, suggesting significant potential for broad applications across various educational contexts. Full article

19 pages, 1076 KiB  
Article
TRUST-ME: Trust-Based Resource Allocation and Server Selection in Multi-Access Edge Computing
by Sean Tsikteris, Aisha B Rahman, Md. Sadman Siraj and Eirini Eleni Tsiropoulou
Future Internet 2024, 16(8), 278; https://doi.org/10.3390/fi16080278 - 4 Aug 2024
Viewed by 965
Abstract
Multi-access edge computing (MEC) has attracted the interest of the research and industrial community to support Internet of things (IoT) applications by enabling efficient data processing and minimizing latency. This paper presents significant contributions toward optimizing the resource allocation and enhancing the decision-making process in edge computing environments. Specifically, the TRUST-ME model is introduced, which consists of multiple edge servers and IoT devices, i.e., users, with varied computing tasks offloaded to the MEC servers. A utility function was designed to quantify the benefits in terms of latency and cost for the IoT device while utilizing the MEC servers’ computing capacities. The core innovation of our work is a novel trust model that was designed to evaluate the IoT devices’ confidence in MEC servers. This model integrates both direct and indirect trust and reflects the trustworthiness of the servers based on the direct interactions and social feedback from other devices using the same servers. This dual trust approach helps with accurately gauging the reliability of MEC services and ensuring more informed decision making. A reinforcement learning framework based on the optimistic Q-learning with an upper confidence bounds action selection algorithm enables the IoT devices to autonomously select a MEC server to process their computing tasks. Also, a multilateral bargaining model is proposed for fair resource allocation of the MEC servers’ computing resources to the users while accounting for their computing demands. Numerical simulations demonstrated the operational effectiveness, convergence, and scalability of the TRUST-ME model, which was validated through real-world scenarios and comprehensive comparative evaluations against existing approaches. Full article
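
The server-selection step can be pictured with the compact sketch below, where each device keeps a Q-value per MEC server, picks the server maximizing the Q-value plus a UCB exploration bonus, and updates incrementally; the reward here is a random placeholder for the paper's latency/cost/trust utility.

```python
# Optimistic Q-learning with UCB action selection for MEC server choice
# (stateless bandit-style simplification of the paper's setting).
import math
import random

N_SERVERS, ALPHA, C = 4, 0.1, 2.0
Q = [1.0] * N_SERVERS                 # optimistic initialization encourages exploration
counts = [0] * N_SERVERS

def select(t):
    """UCB rule: exploit high Q, but bonus rarely tried servers."""
    return max(range(N_SERVERS),
               key=lambda a: Q[a] + C * math.sqrt(math.log(t + 1) / (counts[a] + 1)))

for t in range(1, 501):
    a = select(t)
    reward = random.gauss([0.3, 0.7, 0.5, 0.4][a], 0.1)   # hidden server qualities
    counts[a] += 1
    Q[a] += ALPHA * (reward - Q[a])                       # incremental Q update

print("learned Q:", [round(q, 2) for q in Q], "-> prefers server", Q.index(max(Q)))
```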

24 pages, 1788 KiB  
Article
Machine Learning-Assisted Dynamic Proximity-Driven Sorting Algorithm for Supermarket Navigation Optimization: A Simulation-Based Validation
by Vincent Abella, Johnfil Initan, Jake Mark Perez, Philip Virgil Astillo, Luis Gerardo Cañete, Jr. and Gaurav Choudhary
Future Internet 2024, 16(8), 277; https://doi.org/10.3390/fi16080277 - 2 Aug 2024
Viewed by 843
Abstract
In-store grocery shopping is still widely preferred by consumers despite the rising popularity of online grocery shopping. Moreover, hardware-based in-store navigation systems and shopping list applications such as Walmart’s Store Map, Kroger’s Kroger Edge, and Amazon Go have been developed by supermarkets to address inefficiencies in shopping. Even so, the cost-effectiveness, optimization capability, and scalability of current systems remain issues. To address these problems, this study investigates the optimization of grocery shopping by proposing a proximity-driven dynamic sorting algorithm with the assistance of machine learning. The research method analyzes the impact and effectiveness of two machine learning models, or ML-DProSA variants—agglomerative hierarchical and affinity propagation clustering algorithms—in different setups and configurations on the performance of grocery shoppers in a simulation environment patterned after an actual supermarket. The unique shopping patterns of a grocery shopper and the proximity of items based on timestamps are utilized in sorting grocery items, consequently reducing the distance traveled. Our findings reveal that both algorithms reduce dwell times for grocery shoppers compared to having an unsorted grocery shopping list. Ultimately, this research, with ML-DProSA’s optimization capabilities, aims to be the foundation for a mobile grocery shopping application usable in any grocery store. Full article
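
The clustering side of the approach can be sketched with scikit-learn: items that tend to be picked up close together in time are grouped by agglomerative clustering on a precomputed distance matrix, and the list is then emitted cluster by cluster; the toy gap matrix below stands in for the paper's timestamp-derived features.

```python
# Agglomerative clustering of items by average pickup-time gaps.
import numpy as np
from sklearn.cluster import AgglomerativeClustering

items = ["milk", "cheese", "bread", "jam", "soap", "shampoo"]
# Toy pairwise "seconds between pickups" matrix (symmetric, zero diagonal).
gaps = np.array([
    [0, 20, 200, 210, 400, 410],
    [20, 0, 190, 205, 395, 405],
    [200, 190, 0, 15, 380, 390],
    [210, 205, 15, 0, 385, 395],
    [400, 395, 380, 385, 0, 25],
    [410, 405, 390, 395, 25, 0],
])

labels = AgglomerativeClustering(
    n_clusters=3, metric="precomputed", linkage="average"  # affinity= on scikit-learn < 1.2
).fit_predict(gaps)

# Sort the grocery list so items from the same proximity cluster are adjacent.
for item, lab in sorted(zip(items, labels), key=lambda p: p[1]):
    print(lab, item)
```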

33 pages, 4252 KiB  
Article
Artificial Intelligence of Things as New Paradigm in Aviation Health Monitoring Systems
by Igor Kabashkin and Leonid Shoshin
Future Internet 2024, 16(8), 276; https://doi.org/10.3390/fi16080276 - 2 Aug 2024
Cited by 2 | Viewed by 4209
Abstract
The integration of artificial intelligence of things (AIoT) is transforming aviation health monitoring systems by combining extensive data collection with advanced analytical capabilities. This study proposes a framework that enhances predictive accuracy, operational efficiency, and safety while optimizing maintenance strategies and reducing costs. Utilizing a three-tiered cloud architecture, the AIoT system enables real-time data acquisition from sensors embedded in aircraft systems, followed by machine learning algorithms that analyze and interpret the data for proactive decision making. This research examines the evolution from traditional to AIoT-enhanced monitoring, presenting a comprehensive architecture integrated with satellite communication and 6G technology. The paper introduces mathematical models that quantify the benefits of increased diagnostic depth through AIoT, covering aspects such as predictive accuracy, cost savings, and safety improvements. The findings emphasize the strategic importance of investing in AIoT technologies to balance cost, safety, and efficiency in aviation maintenance and operations, marking a paradigm shift from traditional health monitoring to proactive health management in aviation. Full article
(This article belongs to the Special Issue Artificial Intelligence and Blockchain Technology for Smart Cities)
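As a toy illustration of how such models can relate diagnostic accuracy to cost (not the paper's actual formulation), the sketch below compares expected per-flight maintenance cost at two detection accuracies; all probabilities and cost figures are invented assumptions.

```python
# Toy cost model (not the paper's): missed faults escalate into far more
# expensive unscheduled failures, so higher monitoring accuracy pays off.

def expected_cost(p_fault, accuracy, c_inspect, c_failure):
    """Expected per-flight cost given a monitor with the stated accuracy."""
    detected = p_fault * accuracy          # caught early -> planned inspection
    missed = p_fault * (1.0 - accuracy)    # missed -> unscheduled failure
    return detected * c_inspect + missed * c_failure

baseline = expected_cost(p_fault=0.02, accuracy=0.60, c_inspect=5_000, c_failure=250_000)
aiot = expected_cost(p_fault=0.02, accuracy=0.95, c_inspect=5_000, c_failure=250_000)
print(f"baseline ${baseline:,.0f} vs AIoT ${aiot:,.0f} per flight")  # $2,060 vs $345
```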

22 pages, 914 KiB  
Article
Estimating Interception Density in the BB84 Protocol: A Study with a Noisy Quantum Simulator
by Francesco Fiorini, Michele Pagano, Rosario Giuseppe Garroppo and Antonio Osele
Future Internet 2024, 16(8), 275; https://doi.org/10.3390/fi16080275 - 2 Aug 2024
Viewed by 4576
Abstract
Quantum computers have the potential to break the public-key cryptosystems widely used in key exchange and digital signature applications. To address this issue, quantum key distribution (QKD) offers a robust countermeasure against quantum computer attacks. Among various QKD schemes, BB84 is the most widely used and studied. However, BB84 implementations are inherently imperfect, resulting in quantum bit error rates (QBERs) even in the absence of eavesdroppers, and distinguishing QBERs caused by eavesdropping from QBERs due to channel imperfections is fundamentally infeasible. In this context, this paper proposes and examines a practical method for detecting eavesdropping via partial intercept-and-resend attacks in the BB84 protocol. A key feature of the proposed method is its consideration of quantum system noise. The efficacy of the method is assessed by employing the Quantum Solver library in conjunction with backend simulators, inspired by real quantum machines, that model quantum system noise. The simulation outcomes demonstrate the method’s capacity to accurately estimate the eavesdropper’s interception density in the presence of system noise. Moreover, the results indicate that this estimation accuracy depends on both the actual interception density and the key length. Full article
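A minimal sketch of the estimation idea follows, assuming errors from channel noise and eavesdropping add independently and using the well-known fact that a full intercept-and-resend attack induces roughly 25% QBER; this simplified noise model is an assumption for illustration, not the paper's Quantum Solver setup.

```python
# Hedged sketch: with a calibrated noise floor, the interception density
# (fraction of qubits Eve intercepts and resends) can be estimated from
# the excess observed QBER, since full interception adds ~25% error.

def estimate_interception_density(qber_observed, qber_noise):
    """Estimate the fraction of qubits intercepted, assuming noise and
    eavesdropping errors contribute independently to the QBER."""
    excess = max(0.0, qber_observed - qber_noise)
    return min(1.0, excess / 0.25)  # 0.25 = QBER of full intercept-and-resend

print(estimate_interception_density(qber_observed=0.11, qber_noise=0.03))
# -> 0.32, i.e., roughly a third of the qubits were intercepted
```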

13 pages, 550 KiB  
Article
Dynamic Storage Optimization for Communication between AI Agents
by Andrei Tara, Hjalmar K. Turesson and Nicolae Natea
Future Internet 2024, 16(8), 274; https://doi.org/10.3390/fi16080274 - 1 Aug 2024
Viewed by 1043
Abstract
Today, AI is primarily narrow, meaning that each model or agent can only perform one task or a narrow range of tasks. However, systems with broad capabilities can be built by connecting multiple narrow AIs. Connecting various AI agents in an open, multi-organizational environment requires a new communication model. Here, we develop a multi-layered ontology-based communication framework. Ontology concepts provide semantic definitions for the agents’ inputs and outputs, enabling them to dynamically identify communication requirements and build processing pipelines. Critically, the ontology concepts are stored on a decentralized storage medium that allows fast reading and writing. The multi-layered design offers flexibility by dividing a monolithic ontology model into semantic layers, allowing read and write latencies to be optimized. We investigate the impact of this optimization through benchmarking experiments on three decentralized storage mediums—IPFS, Tendermint Cosmos, and Hyperledger Fabric—across a wide range of configurations. The increased read–write speeds allow AI agents to communicate efficiently in a decentralized environment using ontology principles, making it easier to apply AI widely across applications. Full article
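The following is a minimal benchmarking harness in the spirit of these experiments, assuming a common read/write interface over ontology concepts; the in-memory backend stands in for the IPFS, Tendermint Cosmos, and Hyperledger Fabric clients, whose APIs are deliberately not reproduced here.

```python
import time
from statistics import mean

# Minimal harness for timing concept reads/writes against interchangeable
# storage backends. Any real backend would implement the same two methods.

class InMemoryStore:
    def __init__(self):
        self._data = {}
    def write(self, key, concept):
        self._data[key] = concept
    def read(self, key):
        return self._data[key]

def benchmark(store, concepts, repeats=100):
    """Return mean batch write and read latencies over several repeats."""
    writes, reads = [], []
    for _ in range(repeats):
        t0 = time.perf_counter()
        for key, concept in concepts.items():
            store.write(key, concept)
        writes.append(time.perf_counter() - t0)
        t0 = time.perf_counter()
        for key in concepts:
            store.read(key)
        reads.append(time.perf_counter() - t0)
    return mean(writes), mean(reads)

# one semantic layer's worth of (hypothetical) ontology concepts
concepts = {f"layer1/concept{i}": {"label": f"c{i}"} for i in range(50)}
w, r = benchmark(InMemoryStore(), concepts)
print(f"avg write batch {w * 1e6:.1f} us, avg read batch {r * 1e6:.1f} us")
```

Splitting the ontology into layers means each agent only pays the read/write cost of the layers it actually needs, which is what the latency optimization above would measure per layer.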

18 pages, 339 KiB  
Article
Modeling Trust in IoT Systems for Drinking-Water Management
by Aicha Aiche, Pierre-Martin Tardif and Mohammed Erritali
Future Internet 2024, 16(8), 273; https://doi.org/10.3390/fi16080273 - 30 Jul 2024
Viewed by 963
Abstract
This study focuses on trust within water-treatment IoT plants, examining the collaboration between IoT devices, control systems, and skilled personnel. The main aim is to assess the levels of trust between these critical elements based on specific criteria and to emphasize that trust is neither bidirectional nor transitive. To this end, we developed a synthetic database representing the critical elements in the system, taking into account characteristics such as accuracy, reliability, and experience. Using a mathematical model based on the analytic hierarchy process (AHP), we calculated levels of trust between these critical elements, taking into account temporal dynamics and the non-bidirectional nature of trust. Our experiments included anomalous scenarios, such as sudden fluctuations in IoT device reliability and significant variations in staff experience, incorporated to assess the robustness of our approach. The resulting trust levels provide detailed insight into the relationships between critical elements, enhancing our understanding of trust in the context of water-treatment plants. Full article
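As a hedged sketch of the AHP step implied by the abstract, the code below derives criterion weights from a pairwise comparison matrix via its principal eigenvector and scores trust as a weighted sum; the comparison values and criterion scores are invented for illustration and do not reproduce the paper's model.

```python
import numpy as np

# AHP sketch: pairwise comparisons of the criteria (accuracy, reliability,
# experience) on Saaty's 1-9 scale; values below are invented assumptions.
A = np.array([
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 3.0],
    [1 / 5, 1 / 3, 1.0],
])

# The principal eigenvector of the comparison matrix gives the weight vector.
eigvals, eigvecs = np.linalg.eig(A)
principal = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
weights = principal / principal.sum()

# Trust score of one critical element given its normalized criterion values.
criteria_scores = np.array([0.9, 0.7, 0.6])  # accuracy, reliability, experience
trust = float(weights @ criteria_scores)
print("weights:", np.round(weights, 3), "trust:", round(trust, 3))
```

Because the comparison matrix is directional (A[i][j] = 1 / A[j][i] encodes one-way preference), such pairwise trust assessments need not be bidirectional, consistent with the study's premise.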
