Search Results (119)

Search Parameters:
Keywords = Kubernetes

23 pages, 2148 KiB  
Article
Enhancing Microservice Security Through Vulnerability-Driven Trust in the Service Mesh Architecture
by Rami Alboqmi and Rose F. Gamble
Sensors 2025, 25(3), 914; https://doi.org/10.3390/s25030914 - 3 Feb 2025
Viewed by 309
Abstract
Cloud-native computing enhances the deployment of microservice architecture (MSA) applications by improving scalability and resilience, particularly in Beyond 5G (B5G) environments such as Sixth-Generation (6G) networks. This is achieved through the ability to replace traditional hardware dependencies with software-defined solutions. While service meshes enable secure communication for deployed MSAs, they struggle to identify vulnerabilities inherent to microservices. The reliance on third-party libraries and modules, essential for MSAs, introduces significant supply chain security risks. Implementing a zero-trust approach for MSAs requires robust mechanisms to continuously verify and monitor the software supply chain of deployed microservices. However, existing service mesh solutions lack runtime trust evaluation capabilities for continuous vulnerability assessment of third-party libraries and modules. This paper introduces a mechanism for continuous runtime trust evaluation of microservices, integrating vulnerability assessments within a service mesh to enhance the deployed MSA application. The proposed approach dynamically assigns trust scores to deployed microservices, rewarding secure practices such as timely vulnerability patching. It also enables the sharing of assessment results, enhancing mitigation strategies across the deployed MSA application. The mechanism is evaluated using the Train Ticket MSA, a complex open-source benchmark MSA application deployed with Docker containers, orchestrated using Kubernetes, and integrated with the Istio service mesh. Results demonstrate that the enhanced service mesh effectively supports dynamic trust evaluation based on the vulnerability posture of deployed microservices, significantly improving MSA security and paving the way for future self-adaptive solutions.
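
Read as pseudocode, the trust mechanism boils down to mapping each service's open vulnerability findings to a score the mesh can act on. A minimal Python sketch of that idea (the weights, fields, and score shape are illustrative assumptions, not the authors' published formula):

```python
from dataclasses import dataclass

@dataclass
class VulnReport:
    cvss: float        # CVSS base score, 0.0-10.0
    days_unpatched: int

def trust_score(reports: list[VulnReport], patch_reward: float = 0.1) -> float:
    """Map a microservice's open vulnerabilities to a 0..1 trust score.

    Severity drags the score down; prompt patching (few, short-lived open
    findings) keeps it near 1.0. All weights here are illustrative only.
    """
    if not reports:
        return 1.0
    penalty = sum(r.cvss / 10.0 * (1 + min(r.days_unpatched, 90) / 90.0)
                  for r in reports) / len(reports)
    score = max(0.0, 1.0 - 0.5 * penalty)
    # Reward a history of timely patching (e.g., tracked by the mesh control plane).
    return min(1.0, score + patch_reward)

# Example: one critical, long-unpatched CVE pulls trust well below 1.0.
print(trust_score([VulnReport(cvss=9.8, days_unpatched=60)]))
```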

34 pages, 16504 KiB  
Article
Vehicle-to-Everything-Car Edge Cloud Management with Development, Security, and Operations Automation Framework
by DongHwan Ku, Hannie Zang, Anvarjon Yusupov, Sun Park and JongWon Kim
Electronics 2025, 14(3), 478; https://doi.org/10.3390/electronics14030478 - 24 Jan 2025
Viewed by 395
Abstract
Modern autonomous driving and intelligent transportation systems face critical challenges in managing real-time data processing, network latency, and security threats across distributed vehicular environments. Conventional cloud-centric architectures typically struggle to meet the low-latency and high-reliability requirements of vehicle-to-everything (V2X) applications, particularly in dynamic and resource-constrained edge environments. To address these challenges, this study introduces the V2X-Car Edge Cloud system, which is a cloud-native architecture driven by DevSecOps principles to ensure secure deployment, dynamic resource orchestration, and real-time monitoring across distributed edge nodes. The proposed system integrates multicluster orchestration with Kubernetes, hybrid communication protocols (C-V2X, 5G, and WAVE), and data-fusion pipelines to enhance transparency in artificial intelligence (AI)-driven decision making. A software-in-the-loop simulation environment was implemented to validate AI models, and the SmartX MultiSec framework was integrated into the proposed system to dynamically monitor network traffic flow and security. Experimental evaluations in a virtual driving environment demonstrate the ability of the proposed system to perform automated security updates, continuous performance monitoring, and dynamic resource allocation without manual intervention.
(This article belongs to the Special Issue Cloud Computing, IoT, and Big Data: Technologies and Applications)
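
One ingredient of such multicluster operation is simply watching node health across every edge cluster from one control point. A hedged sketch with the Kubernetes Python client (the kubeconfig context names are placeholders; the paper's full DevSecOps pipeline is not reproduced here):

```python
from kubernetes import client, config

# Hypothetical kubeconfig contexts for the edge clusters; replace with your own.
CONTEXTS = ["car-edge-1", "car-edge-2", "regional-cloud"]

def cluster_ready_nodes(context: str) -> tuple[int, int]:
    """Return (ready, total) node counts for one cluster context."""
    api = client.CoreV1Api(api_client=config.new_client_from_config(context=context))
    nodes = api.list_node().items
    ready = sum(
        1 for n in nodes
        for c in n.status.conditions
        if c.type == "Ready" and c.status == "True"
    )
    return ready, len(nodes)

for ctx in CONTEXTS:
    ready, total = cluster_ready_nodes(ctx)
    print(f"{ctx}: {ready}/{total} nodes ready")
```
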
15 pages, 2904 KiB  
Perspective
IoT, Blockchain, Big Data and Artificial Intelligence (IBBA) Framework—For Real-Time Food Safety Monitoring
by Siva Peddareddigari, Sri Vigna Hema Vijayan and Manickavasagan Annamalai
Appl. Sci. 2025, 15(1), 105; https://doi.org/10.3390/app15010105 - 26 Dec 2024
Viewed by 757
Abstract
Technological advancements in mechanized food production have expanded markets beyond geographical boundaries. At the same time, the risk of contamination has increased severalfold, often resulting in significant damage in terms of food wastage, economic loss to the producers, danger to public health, or all of these. In general, governments across the world have recognized the importance of having food safety processes in place to impose food recalls as required. However, the primary challenges to the existing practices are delays in identifying unsafe food, siloed data handling, delayed decision making, and tracing the source of contamination. Leveraging the Internet of Things (IoT), 5G, blockchains, cloud computing, and big data, a novel framework has been proposed to address the current challenges. The framework enables real-time data gathering and in situ application of machine learning-powered algorithms to predict contamination and facilitate instant decision making. Since the data are processed in real time, the proposed approach enables contamination to be identified early and informed decisions to be made confidently, thereby helping to reduce damage significantly. The proposed approach also raises new challenges in terms of implementing changes to data collection across all phases of food production, onboarding various stakeholders, and adapting to a new process.
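
The "in situ machine learning" step could be as lightweight as streaming anomaly detection at the edge. A toy sketch under that assumption (the window size, threshold, and cold-chain temperature example are invented for illustration):

```python
from collections import deque
from statistics import mean, stdev

class ContaminationAlert:
    """Flag sensor readings that drift outside the recent norm."""
    def __init__(self, window: int = 50, z_limit: float = 3.0):
        self.readings = deque(maxlen=window)
        self.z_limit = z_limit

    def observe(self, value: float) -> bool:
        alert = False
        if len(self.readings) >= 10:
            mu, sigma = mean(self.readings), stdev(self.readings)
            if sigma > 0 and abs(value - mu) / sigma > self.z_limit:
                alert = True  # e.g., notify the recall workflow / ledger
        self.readings.append(value)
        return alert

detector = ContaminationAlert()
for temp in [4.0, 4.1, 3.9] * 10 + [9.5]:   # cold-chain temperatures, then a spike
    if detector.observe(temp):
        print(f"possible contamination risk: {temp} °C")
```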

20 pages, 5140 KiB  
Article
Distribution-Based Approach for Efficient Storage and Indexing of Massive Infrared Hyperspectral Sounding Data
by Han Li, Mingjian Gu, Guang Shi, Yong Hu and Mengzhen Xie
Remote Sens. 2024, 16(21), 4088; https://doi.org/10.3390/rs16214088 - 1 Nov 2024
Viewed by 818
Abstract
Hyperspectral infrared atmospheric sounding data, characterized by their high vertical resolution, play a crucial role in capturing three-dimensional atmospheric spatial information. The hyperspectral infrared atmospheric detectors HIRAS/HIRAS-II, mounted on the FY3D/EF satellite, have established an initial global coverage network for atmospheric sounding. The collaborative observation approach involving multiple satellites will improve both the coverage and responsiveness of data acquisition, thereby enhancing the overall quality and reliability of the data. In response to the increasing number of channels, the rapid growth of data volume, and the specific requirements of multi-satellite joint observation applications with infrared hyperspectral sounding data, this paper introduces an efficient storage and indexing method for infrared hyperspectral sounding data within a distributed architecture for the first time. The proposed approach, built on the Kubernetes cloud platform, utilizes the Google S2 discrete grid spatial indexing algorithm to establish a grid-based hierarchical model for unified metadata-embedded documents. Additionally, it optimizes the rowkey design using the BPDS model, thereby enabling the distributed storage of data in HBase. The experimental results demonstrate that the query efficiency of the Google S2 grid-based embedded document model is superior to that of the traditional flat model, achieving a query time that is only 35.6% of the latter for a dataset of 5 million records. Additionally, this method exhibits better data distribution characteristics within the global grid compared to the H3 algorithm. Leveraging the BPDS model, the HBase distributed storage system adeptly balances the node load and counteracts the detrimental effects caused by the accumulation of time-series remote sensing images. This architecture significantly enhances both storage and query efficiency, thus laying a robust foundation for forthcoming distributed computing.
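
The S2 part of the design can be reproduced with the open-source s2sphere library; the rowkey below guesses at the general shape (grid token plus reversed timestamp to spread write load), not the paper's exact BPDS layout:

```python
import s2sphere

S2_LEVEL = 7  # coarse grid level; the paper's choice may differ

def s2_cell_token(lat: float, lon: float) -> str:
    """Token of the S2 cell containing a granule's footprint center."""
    cell = s2sphere.CellId.from_lat_lng(
        s2sphere.LatLng.from_degrees(lat, lon)).parent(S2_LEVEL)
    return cell.to_token()

def hbase_rowkey(lat: float, lon: float, obs_time_unix: int) -> bytes:
    # Prefix with the grid token so spatial neighbors cluster together;
    # reverse the timestamp so recent granules don't pile onto one region server.
    reversed_ts = 2**32 - obs_time_unix
    return f"{s2_cell_token(lat, lon)}:{reversed_ts:010d}".encode()

print(hbase_rowkey(31.2, 121.5, 1730419200))
```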

19 pages, 1073 KiB  
Article
Methodology for Automating and Orchestrating Performance Evaluation of Kubernetes Container Network Interfaces
by Vedran Dakić, Jasmin Redžepagić, Matej Bašić and Luka Žgrablić
Computers 2024, 13(11), 283; https://doi.org/10.3390/computers13110283 - 1 Nov 2024
Viewed by 1297
Abstract
Maintaining a fast, low-latency network is essential in the demanding world of High-Performance Computing (HPC). Any compromise in network performance can severely affect distributed HPC applications, leading to bottlenecks that undermine the entire system’s efficiency. This paper highlights the critical need for precise and consistent evaluation of Kubernetes Container Network Interfaces (CNIs) to ensure that HPC workloads can operate at their full potential. Traditional manual methods for evaluating network bandwidth and latency are time-consuming and prone to errors, making them inadequate for the rigorous demands of HPC environments. To address this, we introduce a novel approach that leverages Ansible to automate and standardize the evaluation process across diverse CNIs, performance profiles, and configurations. By eliminating human error and ensuring replicability, this method significantly enhances the reliability of performance assessments. The Ansible playbooks we developed enable the efficient deployment, configuration, and execution of CNIs and evaluations, providing a robust framework for ensuring that Kubernetes-based infrastructures can meet the stringent performance requirements of HPC applications. This approach is vital for safeguarding the performance integrity of HPC workloads, ensuring that inadequate network configurations do not cripple them.
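
The orchestration loop itself is small once the playbooks exist. A skeletal Python driver under assumed playbook and result-file names (the authors' actual playbooks are not reproduced here):

```python
import json
import subprocess

CNIS = ["calico", "cilium", "flannel"]   # CNIs under test (illustrative list)

def run(cmd: list[str]) -> None:
    subprocess.run(cmd, check=True)

for cni in CNIS:
    # Hypothetical playbooks: redeploy the cluster with the chosen CNI,
    # then launch iperf3 client/server pods and collect the JSON results.
    run(["ansible-playbook", "deploy_cluster.yml", "-e", f"cni={cni}"])
    run(["ansible-playbook", "run_benchmark.yml", "-e", f"result_file=results_{cni}.json"])
    with open(f"results_{cni}.json") as f:
        bps = json.load(f)["end"]["sum_received"]["bits_per_second"]
    print(f"{cni}: {bps / 1e9:.2f} Gbit/s")
```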

24 pages, 2796 KiB  
Article
Performance and Latency Efficiency Evaluation of Kubernetes Container Network Interfaces for Built-In and Custom Tuned Profiles
by Vedran Dakić, Jasmin Redžepagić, Matej Bašić and Luka Žgrablić
Electronics 2024, 13(19), 3972; https://doi.org/10.3390/electronics13193972 - 9 Oct 2024
Cited by 1 | Viewed by 1891
Abstract
In the era of DevOps, developing new toolsets and frameworks that leverage DevOps principles is crucial. This paper demonstrates how Ansible’s powerful automation capabilities can be harnessed to manage the complexity of Kubernetes environments. This paper evaluates efficiency across various CNI (Container Network Interface) plugins by orchestrating performance analysis tools across multiple power profiles. Our performance evaluations across network interfaces with different theoretical bandwidths gave us a comprehensive understanding of CNI performance and overall efficiency, with performance efficiency coming well below expectations. Our research confirms that certain CNIs are better suited for specific use cases, mainly when tuning our environment for smaller or larger network packets and workload types, but also that there are configuration changes we can make to mitigate that. This paper also provides research into how to use performance tuning to optimize the performance and efficiency of our CNI infrastructure, with practical implications for improving the performance of Kubernetes environments in real-world scenarios, particularly in more demanding scenarios such as High-Performance Computing (HPC) and Artificial Intelligence (AI).
(This article belongs to the Special Issue Software-Defined Cloud Computing: Latest Advances and Prospects)
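
Once per-CNI, per-profile iperf3 JSON files are collected, tabulating them is a short parsing job. A sketch assuming a results_<cni>_<profile>.json naming scheme (the naming is an assumption, not the paper's tooling):

```python
import json
from pathlib import Path

def throughput_gbps(path: Path) -> float:
    """Received TCP throughput from an iperf3 --json result file."""
    data = json.loads(path.read_text())
    return data["end"]["sum_received"]["bits_per_second"] / 1e9

rows: dict[str, dict[str, float]] = {}
for path in sorted(Path("results").glob("results_*_*.json")):
    _, cni, profile = path.stem.split("_", 2)
    rows.setdefault(cni, {})[profile] = throughput_gbps(path)

for cni, profiles in rows.items():
    line = "  ".join(f"{p}={g:.2f}Gb/s" for p, g in sorted(profiles.items()))
    print(f"{cni:>10}: {line}")
```
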

21 pages, 4248 KiB  
Article
OOSP: Opportunistic Optimization Scheme for Pod Deployment Enhanced with Multilayered Sensing
by Joo-Young Roh, Sang-Hoon Choi and Ki-Woong Park
Sensors 2024, 24(19), 6244; https://doi.org/10.3390/s24196244 - 26 Sep 2024
Viewed by 773
Abstract
In modern cloud environments, container orchestration tools are essential for effectively managing diverse workloads and services, and Kubernetes has become the de facto standard tool for automating the deployment, scaling, and operation of containerized applications. While Kubernetes plays an important role in optimizing and managing the deployment of diverse services and applications, its default scheduling approach, which is not optimized for all types of workloads, can often result in poor performance and wasted resources. This is particularly true in environments with complex interactions between services, such as microservice architectures. The traditional Kubernetes scheduler makes scheduling decisions based on CPU and memory usage, but the limitation of this arrangement is that it does not fully account for the performance and resource efficiency of the application. As a result, the communication latency between services increases, and the overall system performance suffers. Therefore, a more sophisticated and adaptive scheduling method is required. In this work, we propose an adaptive pod placement optimization technique using multi-tier inspection to address these issues. The proposed technique collects and analyzes multi-tier data to improve application performance and resource efficiency, which are overlooked by the default Kubernetes scheduler. It derives optimal placements based on the coupling and dependencies between pods, resulting in more efficient resource usage and better performance. To validate the performance of the proposed method, we configured a Kubernetes cluster in a virtualized environment and conducted experiments using a benchmark application with a microservice architecture. The experimental results show that the proposed method outperforms the existing Kubernetes scheduler, reducing the average response time by up to 11.5% and increasing the number of requests processed per second by up to 10.04%. This indicates that the proposed method minimizes the inter-pod communication delay and improves the system-wide resource utilization. This research aims to optimize application performance and increase resource efficiency in cloud-native environments, and the proposed technique can be applied to different cloud environments and workloads in the future to provide more generalized optimizations. This is expected to contribute to increasing the operational efficiency of cloud infrastructure and improving the quality of service.
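
The core of such affinity-aware placement is a node-scoring function that trades co-location with chatty peers against spare capacity. A toy Python version with an invented traffic matrix and weights (not the paper's OOSP scoring):

```python
# Hypothetical observed traffic (requests/s) between the pod being scheduled
# and already-placed pods, e.g. gathered from service-mesh telemetry.
traffic_to = {"cart": 120.0, "payment": 45.0, "search": 5.0}
placement = {"cart": "node-a", "payment": "node-b", "search": "node-a"}
free_cpu = {"node-a": 1.5, "node-b": 3.0, "node-c": 4.0}  # cores

def score(node: str, w_affinity: float = 1.0, w_cpu: float = 10.0) -> float:
    # Reward co-location with chatty peers, but keep some weight on free CPU
    # so one node doesn't absorb everything.
    affinity = sum(rps for peer, rps in traffic_to.items()
                   if placement[peer] == node)
    return w_affinity * affinity + w_cpu * free_cpu[node]

best = max(free_cpu, key=score)
print(f"place pod on {best}")  # node-a: 125 rps of affinity beats node-c's spare CPU
```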

15 pages, 474 KiB  
Article
Federated Learning in Dynamic and Heterogeneous Environments: Advantages, Performances, and Privacy Problems
by Fabio Liberti, Davide Berardi and Barbara Martini
Appl. Sci. 2024, 14(18), 8490; https://doi.org/10.3390/app14188490 - 20 Sep 2024
Viewed by 2506
Abstract
Federated Learning (FL) represents a promising distributed learning methodology particularly suitable for dynamic and heterogeneous environments characterized by the presence of Internet of Things (IoT) devices and Edge Computing infrastructures. In this context, FL allows machine learning models to be trained directly on edge devices, mitigating data privacy concerns and reducing the latency incurred by transmitting data to central servers. However, the heterogeneity of computational resources, the variability of network connections, and the mobility of IoT devices pose significant challenges to the efficient implementation of FL. This work explores advanced techniques for dynamic model adaptation and heterogeneous data management in edge computing scenarios, proposing innovative solutions to improve the robustness and efficiency of federated learning. We present an innovative solution based on Kubernetes which enables the fast application of FL models to Heterogeneous Architectures. Experimental results demonstrate that our proposals can improve the performance of FL in IoT and edge environments, offering new perspectives for the practical implementation of decentralized intelligent systems.
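
For orientation, the aggregation step at the heart of most FL systems (FedAvg) fits in a few lines; the Kubernetes-specific packaging of clients and aggregator that the paper contributes is not shown:

```python
import numpy as np

def fedavg(client_weights: list[list[np.ndarray]],
           client_sizes: list[int]) -> list[np.ndarray]:
    """Average per-layer parameters, weighting each client by its dataset size."""
    total = sum(client_sizes)
    return [
        sum(w[layer] * (n / total) for w, n in zip(client_weights, client_sizes))
        for layer in range(len(client_weights[0]))
    ]

# Two clients with a single 2x2 layer; the larger client dominates the average.
a = [np.ones((2, 2))]
b = [np.zeros((2, 2))]
print(fedavg([a, b], client_sizes=[300, 100])[0])  # -> all 0.75
```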

21 pages, 431 KiB  
Article
Application of Proximal Policy Optimization for Resource Orchestration in Serverless Edge Computing
by Mauro Femminella and Gianluca Reali
Computers 2024, 13(9), 224; https://doi.org/10.3390/computers13090224 - 6 Sep 2024
Viewed by 1191
Abstract
Serverless computing is a new cloud computing model suitable for providing services in both large cloud and edge clusters. In edge clusters, the autoscaling functions play a key role on serverless platforms as the dynamic scaling of function instances can lead to reduced latency and efficient resource usage, both typical requirements of edge-hosted services. However, a badly configured scaling function can introduce unexpected latency due to so-called “cold start” events or service request losses. In this work, we focus on the optimization of resource-based autoscaling on OpenFaaS, the most-adopted open-source Kubernetes-based serverless platform, leveraging real-world serverless traffic traces. We resort to the reinforcement learning algorithm named Proximal Policy Optimization to dynamically configure the value of the Kubernetes Horizontal Pod Autoscaler, trained on real traffic. This was accomplished via a state space model able to take into account resource consumption, performance values, and time of day. In addition, the reward function definition promotes Service-Level Agreement (SLA) compliance. We evaluate the proposed agent, comparing its performance in terms of average latency, CPU usage, memory usage, and loss percentage with respect to the baseline system. The experimental results show the benefits provided by the proposed agent, obtaining a service time within the SLA while limiting resource consumption and service loss.
(This article belongs to the Special Issue Advances in High-Performance Switching and Routing)
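
Whatever the agent decides, the action has to land on the cluster as an HPA update. A hedged sketch of that last step with the Kubernetes Python client (the function and namespace names are placeholders, and the PPO agent itself is not reproduced):

```python
from kubernetes import client, config

config.load_kube_config()
autoscaling = client.AutoscalingV1Api()

def apply_action(min_replicas: int, max_replicas: int,
                 name: str = "figlet", namespace: str = "openfaas-fn") -> None:
    """Apply the agent's chosen HPA bounds (function name is hypothetical)."""
    body = {"spec": {"minReplicas": min_replicas, "maxReplicas": max_replicas}}
    autoscaling.patch_namespaced_horizontal_pod_autoscaler(name, namespace, body)

# e.g., the policy decided to widen the scaling range for the evening peak:
apply_action(min_replicas=2, max_replicas=12)
```
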

22 pages, 8922 KiB  
Article
A Novel Framework for Cross-Cluster Scaling in Cloud-Native 5G NextGen Core
by Oana-Mihaela Dumitru-Guzu, Vlădeanu Călin and Robert Kooij
Future Internet 2024, 16(9), 325; https://doi.org/10.3390/fi16090325 - 6 Sep 2024
Viewed by 1132
Abstract
Cloud-native technologies are widely considered the ideal candidates for the future of vertical application development due to their boost in flexibility, scalability, and especially cost efficiency. Since multi-site support is paramount for 5G, we employ a multi-cluster model that scales on demand, shifting the boundaries of both horizontal and vertical scaling for shared resources. Our approach is based on the liquid computing paradigm, which has the benefit of adapting to the changing environment. Despite being a decentralized deployment shared across data centers, the 5G mobile core can be managed as a single cluster entity running in a public cloud. We achieve this by following the cloud-native patterns for declarative configuration based on Kubernetes APIs and on-demand resource allocation. Moreover, in our setup, we analyze the offloading of both the Open5GS user and control plane functions under two different peering scenarios. A significant improvement in terms of latency and throughput is achieved for the in-band peering, considering the traffic between clusters is ensured by the Liqo control plane through a VPN tunnel. We also validate three end-to-end network slicing use cases, showcasing the full 5G core automation and leveraging the capabilities of Kubernetes multi-cluster deployments and inter-service monitoring through the applied service mesh solution.
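
On the Liqo side, offloading a namespace and checking the resulting virtual nodes looks roughly like the following; the namespace name is assumed, and the liqoctl subcommand should be verified against your installed Liqo version:

```python
import subprocess
from kubernetes import client, config

# Offload the 5G core namespace to the peered clusters (subcommand from
# recent Liqo releases; verify against your liqoctl version).
subprocess.run(["liqoctl", "offload", "namespace", "open5gs"], check=True)

# Peered clusters surface as virtual nodes; confirm they are visible.
config.load_kube_config()
for node in client.CoreV1Api().list_node(
        label_selector="liqo.io/type=virtual-node").items:
    print("virtual node:", node.metadata.name)
```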

28 pages, 5453 KiB  
Article
Evaluating ARM and RISC-V Architectures for High-Performance Computing with Docker and Kubernetes
by Vedran Dakić, Leo Mršić, Zdravko Kunić and Goran Đambić
Electronics 2024, 13(17), 3494; https://doi.org/10.3390/electronics13173494 - 3 Sep 2024
Viewed by 4557
Abstract
This paper thoroughly assesses the ARM and RISC-V architectures in the context of high-performance computing (HPC). It includes an analysis of Docker and Kubernetes integration. Our study aims to evaluate and compare these systems’ performance, scalability, and practicality in a general context and then assess the impact they might have on special use cases, like HPC. ARM-based systems exhibited better performance and seamless integration with Docker and Kubernetes, underscoring their advanced development and effectiveness in managing high-performance computing workloads. On the other hand, despite their open-source architecture, RISC-V platforms presented considerable intricacy and difficulties in working with Kubernetes, which hurt their overall effectiveness and ease of management. The results of our study offer valuable insights into the practical consequences of implementing these architectures for HPC, highlighting ARM’s preparedness and the potential of RISC-V while acknowledging the increased complexity and significant trade-offs involved at this point.
(This article belongs to the Section Computer Science & Engineering)
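
In mixed-architecture clusters like those evaluated here, workloads are usually steered with the standard kubernetes.io/arch node label. A minimal example via the Python client; the image and names are illustrative:

```python
from kubernetes import client, config

config.load_kube_config()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="hpc-bench-arm"),
    spec=client.V1PodSpec(
        # Standard, well-known label; use "riscv64" for RISC-V nodes
        # (upstream support there is still uneven, as the paper observes).
        node_selector={"kubernetes.io/arch": "arm64"},
        containers=[client.V1Container(
            name="bench",
            image="hpc-benchmark:latest",  # placeholder image
        )],
        restart_policy="Never",
    ),
)
client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```
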

20 pages, 2522 KiB  
Article
Application of Fuzzy Logic for Horizontal Scaling in Kubernetes Environments within the Context of Edge Computing
by Sérgio N. Silva, Mateus A. S. de S. Goldbarg, Lucileide M. D. da Silva and Marcelo A. C. Fernandes
Future Internet 2024, 16(9), 316; https://doi.org/10.3390/fi16090316 - 2 Sep 2024
Cited by 1 | Viewed by 4317
Abstract
This paper presents a fuzzy logic-based approach for replica scaling in a Kubernetes environment, focusing on integrating Edge Computing. The proposed FHS (Fuzzy-based Horizontal Scaling) system was compared to the standard Kubernetes scaling mechanism, HPA (Horizontal Pod Autoscaler). The comparison considered resource consumption, the number of replicas used, and adherence to latency Service-Level Agreements (SLAs). The experiments were conducted in an environment simulating Edge Computing infrastructure, with virtual machines used to represent edge nodes and traffic generated via JMeter. The results demonstrate that FHS achieves a reduction in CPU consumption, uses fewer replicas under the same stress conditions, and exhibits more distributed SLA latency violation rates compared to HPA. These results indicate that FHS offers a more efficient and customizable solution for replica scaling in Kubernetes within Edge Computing environments, contributing to both operational efficiency and service quality.
(This article belongs to the Special Issue Convergence of IoT, Edge and Cloud Systems)
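
A stripped-down illustration of the fuzzy-scaling idea: triangular membership functions over CPU utilization, defuzzified to a replica delta. The membership shapes and rules here are invented, not FHS's published rule base:

```python
def tri(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def replica_delta(cpu: float) -> float:
    """Fuzzify CPU utilization (0..1), apply rules, defuzzify to a replica change."""
    low = tri(cpu, -0.4, 0.0, 0.5)
    ok = tri(cpu, 0.3, 0.55, 0.8)
    high = tri(cpu, 0.6, 1.0, 1.4)
    # Rules: low -> scale in (-1), ok -> hold (0), high -> scale out (+2).
    num = low * -1 + ok * 0 + high * 2
    den = low + ok + high
    return num / den if den else 0.0

for cpu in (0.2, 0.55, 0.9):
    print(f"cpu={cpu:.2f} -> delta={replica_delta(cpu):+.2f} replicas")
```
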

26 pages, 1012 KiB  
Article
On the Optimization of Kubernetes toward the Enhancement of Cloud Computing
by Subrota Kumar Mondal, Zhen Zheng and Yuning Cheng
Mathematics 2024, 12(16), 2476; https://doi.org/10.3390/math12162476 - 10 Aug 2024
Cited by 1 | Viewed by 2055
Abstract
With the vigorous development of big data and cloud computing, containers are becoming the main platform for running applications due to their flexible and lightweight features. A container cluster management system can more effectively manage massive numbers of containers across multiple machine nodes, and Kubernetes has become a leader in container cluster management systems, with its powerful container orchestration capabilities. However, the default Kubernetes components and settings exhibit performance bottlenecks and are not well adapted to complex usage environments. In particular, the issues are data distribution latency, inefficient cluster backup and restore leading to poor disaster recovery, poor rolling update leading to downtime, inefficiency in load balancing and handling requests, poor autoscaling and scheduling strategy leading to quality of service (QoS) violations and insufficient resource usage, and many others. Aiming at the insufficient performance of the default Kubernetes platform, this paper focuses on reducing the data distribution latency, improving the cluster backup and restore strategies toward better disaster recovery, optimizing zero-downtime rolling updates, incorporating better strategies for load balancing and handling requests, optimizing autoscaling, introducing better scheduling strategy, and so on. At the same time, the relevant experimental analysis is carried out. The experiment results show that compared with the default settings, the optimized Kubernetes platform can handle more than 2000 concurrent requests, reduce the CPU overhead by more than 1.5%, reduce memory usage by more than 0.6%, reduce the average request time by an average of 7.6%, and reduce the number of request failures by at least 32.4%, achieving the expected effect.
(This article belongs to the Special Issue Advanced Computational Intelligence in Cloud/Edge Computing)
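
As one concrete example, the zero-downtime rolling-update piece largely reduces to the deployment's update strategy plus a readiness probe. A sketch applying both with the Kubernetes Python client (deployment and container names are placeholders):

```python
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

# Never take a serving replica away before its replacement is Ready.
patch = {
    "spec": {
        "strategy": {
            "type": "RollingUpdate",
            "rollingUpdate": {"maxUnavailable": 0, "maxSurge": 1},
        },
        "template": {"spec": {"containers": [{
            "name": "web",  # must match the existing container name
            "readinessProbe": {
                "httpGet": {"path": "/healthz", "port": 8080},
                "initialDelaySeconds": 3,
                "periodSeconds": 5,
            },
        }]}},
    }
}
apps.patch_namespaced_deployment(name="web", namespace="default", body=patch)
```
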

13 pages, 385 KiB  
Article
Availability, Scalability, and Security in the Migration from Container-Based to Cloud-Native Applications
by Bruno Nascimento, Rui Santos, João Henriques, Marco V. Bernardo and Filipe Caldeira
Computers 2024, 13(8), 192; https://doi.org/10.3390/computers13080192 - 9 Aug 2024
Viewed by 3260
Abstract
The shift from traditional monolithic architectures to container-based solutions has revolutionized application deployment by enabling consistent, isolated environments across various platforms. However, as organizations look for improved efficiency, resilience, security, and scalability, the limitations of container-based applications, such as their manual scaling, resource management challenges, potential single points of failure, and operational complexities, become apparent. These challenges, coupled with the need for sophisticated tools and expertise for monitoring and security, drive the move towards cloud-native architectures. Cloud-native approaches offer a more robust integration with cloud services, including managed databases and AI/ML services, providing enhanced agility and efficiency beyond what standalone containers can achieve. Availability, scalability, and security are the cornerstone requirements of these cloud-native applications. This work explores how containerized applications can be customized to address such requirements during their shift to cloud-native orchestrated environments. A Proof of Concept (PoC) demonstrated the technical aspects of such a move into a Kubernetes environment in Azure. The results from its evaluation highlighted the suitability of Kubernetes in addressing such a demand for availability and scalability while safeguarding security when moving containerized applications to cloud-native environments.
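
Two of the knobs such a migration typically turns, a PodDisruptionBudget for availability and a Horizontal Pod Autoscaler for scalability, sketched via the Kubernetes Python client; all names and thresholds are placeholders:

```python
from kubernetes import client, config

config.load_kube_config()

# Availability: keep at least 2 replicas up through voluntary disruptions.
pdb = {
    "apiVersion": "policy/v1",
    "kind": "PodDisruptionBudget",
    "metadata": {"name": "api-pdb"},
    "spec": {"minAvailable": 2, "selector": {"matchLabels": {"app": "api"}}},
}
client.PolicyV1Api().create_namespaced_pod_disruption_budget(
    namespace="default", body=pdb)

# Scalability: scale the deployment on CPU instead of doing it by hand.
hpa = client.V1HorizontalPodAutoscaler(
    api_version="autoscaling/v1",
    kind="HorizontalPodAutoscaler",
    metadata=client.V1ObjectMeta(name="api-hpa"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="api"),
        min_replicas=2, max_replicas=10,
        target_cpu_utilization_percentage=70,
    ),
)
client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa)
```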

26 pages, 664 KiB  
Article
Comparison of Reinforcement Learning Algorithms for Edge Computing Applications Deployed by Serverless Technologies
by Mauro Femminella and Gianluca Reali
Algorithms 2024, 17(8), 320; https://doi.org/10.3390/a17080320 - 23 Jul 2024
Cited by 1 | Viewed by 1524
Abstract
Edge computing is one of the technological areas currently considered among the most promising for the implementation of many types of applications. In particular, IoT-type applications can benefit from reduced latency and better data protection. However, the price typically to be paid in order to benefit from the offered opportunities includes the need to use a reduced amount of resources compared to the traditional cloud environment. Indeed, it may happen that only one computing node can be used. In these situations, it is essential to introduce computing and memory resource management techniques that allow resources to be optimized while still guaranteeing acceptable performance, in terms of latency and probability of rejection. For this reason, the use of serverless technologies, managed by reinforcement learning algorithms, is an active area of research. In this paper, we explore and compare the performance of some machine learning algorithms for managing horizontal function autoscaling in a serverless edge computing system. In particular, we make use of open serverless technologies, deployed in a Kubernetes cluster, to experimentally fine-tune the performance of the algorithms. The results obtained allow both the understanding of some basic mechanisms typical of edge computing systems and related technologies that determine system performance and the guiding of configuration choices for systems in operation.
(This article belongs to the Special Issue Machine Learning for Edge Computing)
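
The simplest of the approaches such comparisons usually include, tabular Q-learning over a discretized (load, replicas) state, fits in a short sketch; the discretization, reward, and toy environment below are invented for illustration:

```python
import random
from collections import defaultdict

ACTIONS = (-1, 0, +1)                 # remove / keep / add one function replica
Q = defaultdict(float)                # Q[(state, action)] -> value

def choose(state, eps=0.1):
    if random.random() < eps:         # explore
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, nxt, alpha=0.1, gamma=0.9):
    best_next = max(Q[(nxt, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

def step(load, replicas, action):
    """Toy environment: reward keeping replicas proportional to the load bucket."""
    replicas = max(1, min(10, replicas + action))
    reward = -abs(load - replicas)    # stand-in for a latency/SLA penalty
    return replicas, reward

replicas = 1
for episode in range(2000):
    load = random.randint(1, 5)       # discretized request-rate bucket
    s = (load, replicas)
    a = choose(s)
    replicas, r = step(load, replicas, a)
    update(s, a, r, (load, replicas))
```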