
Search Results (66)

Search Parameters:
Keywords = serverless

32 pages, 9318 KiB  
Article
VidBlock: A Web3.0-Enabled Decentralized Blockchain Architecture for Live Video Streaming
by Hyunjoo Yang and Sejin Park
Appl. Sci. 2025, 15(3), 1289; https://doi.org/10.3390/app15031289 - 26 Jan 2025
Viewed by 534
Abstract
In the digital era, the demand for real-time streaming services highlights the scalability, data sovereignty, and privacy limitations of traditional centralized systems. VidBlock introduces a novel decentralized blockchain architecture that leverages the blockchain’s immutable and transparent characteristics along with direct communication capabilities. This ecosystem revolutionizes content delivery and storage, ensuring high data integrity and user trust. VidBlock’s architecture emphasizes serverless operation, aligning with the principles of decentralization to enhance efficiency and reduce costs. Our contributions include decentralized data management, user-controlled privacy, cost reduction through a serverless architecture, and improved global accessibility. Experiments show that VidBlock is superior in reducing latency and utilizing bandwidth, demonstrating its potential to redefine live video streaming in the Web3.0 era.

22 pages, 797 KiB  
Article
Analyzing the Features, Usability, and Performance of Deploying a Containerized Mobile Web Application on Serverless Cloud Platforms
by Jeong Yang and Anoop Abraham
Future Internet 2024, 16(12), 475; https://doi.org/10.3390/fi16120475 - 19 Dec 2024
Viewed by 587
Abstract
Serverless computing services are offered by major cloud service providers such as Google Cloud Platform, Amazon Web Services, and Microsoft Azure. The primary purpose of the services is to offer efficiency and scalability in modern software development and IT operations while reducing overall costs and operational complexity. However, prospective customers often question which serverless service will best meet their organizational and business needs. This study analyzed the features, usability, and performance of three serverless cloud computing platforms: Google Cloud’s Cloud Run, Amazon Web Service’s App Runner, and Microsoft Azure’s Container Apps. The analysis was conducted with a containerized mobile application designed to track real-time bus locations for San Antonio public buses on specific routes and provide estimated arrival times for selected bus stops. The study evaluated various system-related features, including service configuration, pricing, and memory and CPU capacity, along with performance metrics such as container latency, distance matrix API response time, and CPU utilization for each service. The results of the analysis revealed that Google’s Cloud Run demonstrated better performance and usability than AWS’s App Runner and Microsoft Azure’s Container Apps. Cloud Run exhibited lower latency and faster response time for distance matrix queries. These findings provide valuable insights for selecting an appropriate serverless cloud service for similar containerized web applications.
(This article belongs to the Section Smart System Infrastructure and Applications)

21 pages, 431 KiB  
Article
Application of Proximal Policy Optimization for Resource Orchestration in Serverless Edge Computing
by Mauro Femminella and Gianluca Reali
Computers 2024, 13(9), 224; https://doi.org/10.3390/computers13090224 - 6 Sep 2024
Viewed by 1191
Abstract
Serverless computing is a new cloud computing model suitable for providing services in both large cloud and edge clusters. In edge clusters, autoscaling functions play a key role on serverless platforms, as the dynamic scaling of function instances can lead to reduced latency and efficient resource usage, both typical requirements of edge-hosted services. However, a badly configured scaling function can introduce unexpected latency due to so-called “cold start” events or service request losses. In this work, we focus on the optimization of resource-based autoscaling on OpenFaaS, the most-adopted open-source Kubernetes-based serverless platform, leveraging real-world serverless traffic traces. We resort to the reinforcement learning algorithm named Proximal Policy Optimization to dynamically configure the value of the Kubernetes Horizontal Pod Autoscaler, trained on real traffic. This was accomplished via a state space model able to take into account resource consumption, performance values, and time of day. In addition, the reward function definition promotes Service-Level Agreement (SLA) compliance. We evaluate the proposed agent, comparing its performance in terms of average latency, CPU usage, memory usage, and loss percentage with respect to the baseline system. The experimental results show the benefits provided by the proposed agent, obtaining a service time within the SLA while limiting resource consumption and service loss.
(This article belongs to the Special Issue Advances in High-Performance Switching and Routing)
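The abstract above describes a reward function that promotes SLA compliance while limiting resource consumption. The paper's exact formulation is not reproduced here, so the following is a minimal sketch in which the SLA threshold, the weights, and the penalty shape are all illustrative assumptions:

```python
def reward(latency_ms: float, cpu_util: float, sla_ms: float = 200.0,
           alpha: float = 1.0, beta: float = 0.5) -> float:
    """SLA-aware reward sketch: full credit when observed latency meets the
    SLA, a penalty growing with the SLA violation otherwise, minus a term
    proportional to CPU utilization to discourage over-provisioning.
    All constants are assumptions, not values from the paper."""
    if latency_ms <= sla_ms:
        sla_term = 1.0
    else:
        sla_term = -(latency_ms - sla_ms) / sla_ms
    return alpha * sla_term - beta * cpu_util
```

An RL agent maximizing such a reward is pushed toward configurations that keep latency within the SLA while reserving as little CPU as possible.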

16 pages, 2069 KiB  
Article
Trading Cloud Computing Stocks Using SMA
by Xianrong Zheng and Lingyu Li
Information 2024, 15(8), 506; https://doi.org/10.3390/info15080506 - 21 Aug 2024
Viewed by 1170
Abstract
As cloud computing adoption becomes mainstream, the cloud services market offers vast profits. Moreover, serverless computing, the next stage of cloud computing, comes with huge economic potential. To capitalize on this trend, investors are interested in trading cloud stocks. Because cloud stocks are high-growth technology stocks, investing in them is both rewarding and challenging. The research question here is how a trading strategy will perform on cloud stocks. As a result, this paper employs an effective method, the Simple Moving Average (SMA), to trade cloud stocks. To evaluate its performance, we conducted extensive experiments with real market data spanning over 23 years. Results show that SMA can achieve satisfying performance in terms of several measures, including MAE, RMSE, and R-squared.
(This article belongs to the Special Issue Blockchain Applications for Business Process Management)
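As a concrete illustration of the SMA method named above, here is a minimal pure-Python sketch of a moving average and a standard crossover trading signal. The window lengths and the crossover rule are generic illustrations, not the paper's exact strategy:

```python
def sma(prices, window):
    """Simple Moving Average over a sliding window; None until enough data."""
    out = []
    for i in range(len(prices)):
        if i + 1 < window:
            out.append(None)
        else:
            out.append(sum(prices[i + 1 - window:i + 1]) / window)
    return out

def crossover_signals(prices, short=2, long=3):
    """Emit ('buy') when the short SMA crosses above the long SMA,
    ('sell') on the reverse cross. Windows are illustrative defaults."""
    s, l = sma(prices, short), sma(prices, long)
    signals = []
    for i in range(1, len(prices)):
        if s[i - 1] is None or l[i - 1] is None:
            continue
        if s[i - 1] <= l[i - 1] and s[i] > l[i]:
            signals.append((i, "buy"))
        elif s[i - 1] >= l[i - 1] and s[i] < l[i]:
            signals.append((i, "sell"))
    return signals
```

For example, on a price series that falls and then recovers, the short SMA crosses above the long SMA near the turning point and a single buy signal is produced.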

17 pages, 732 KiB  
Article
SMWE: A Framework for Secure and Makespan-Oriented Workflow Execution in Serverless Computing
by Hao Liang, Shuai Zhang, Xinlei Liu, Guozhen Cheng, Hailong Ma and Qingfeng Wang
Electronics 2024, 13(16), 3246; https://doi.org/10.3390/electronics13163246 - 15 Aug 2024
Viewed by 1080
Abstract
Serverless computing is a promising paradigm that greatly simplifies cloud programming. With serverless computing, developers simply provide event-driven functions to a serverless platform, and these functions can be orchestrated as serverless workflows to accomplish complex tasks. Due to the lightweight limitation of functions, serverless workflows not only suffer from existing vulnerability-based threats but also face new security threats from the function compiling phase. In this paper, we present SMWE, a secure and makespan-oriented workflow execution framework in serverless computing. SMWE enables full life-cycle protection for functions by adopting compiler shifting and running-environment replacement in the serverless workflow. Furthermore, SMWE balances the tradeoff between security and makespan by carefully scheduling functions to running environments and selectively applying the secure techniques to functions. Extensive evaluations show that SMWE significantly increases the security of serverless workflows at a small makespan cost.
(This article belongs to the Special Issue Recent Advances and Applications of Network Security and Cryptography)

19 pages, 1196 KiB  
Article
AI-Driven QoS-Aware Scheduling for Serverless Video Analytics at the Edge
by Dimitrios Giagkos, Achilleas Tzenetopoulos, Dimosthenis Masouros, Sotirios Xydis, Francky Catthoor and Dimitrios Soudris
Information 2024, 15(8), 480; https://doi.org/10.3390/info15080480 - 13 Aug 2024
Viewed by 1554
Abstract
Today, video analytics are becoming extremely popular due to the increasing need for extracting valuable information from videos available in public sharing services through camera-driven streams in IoT environments. To avoid data communication overheads, a common practice is to have computation close to the data source rather than Cloud offloading. Typically, video analytics are organized as separate tasks, each with different resource requirements (e.g., computational- vs. memory-intensive tasks). The serverless computing paradigm forms a promising approach for mapping such types of applications, enabling fine-grained deployment and management in a per-function, per-device manner. However, there is a tradeoff between QoS adherence and resource efficiency. Performance variability due to function co-location and prevalent resource heterogeneity make maintaining QoS challenging. At the same time, resource efficiency is essential to avoid waste, such as unnecessary power consumption and CPU reservation. In this paper, we present Darly, a QoS-, interference- and heterogeneity-aware Deep Reinforcement Learning-based Scheduler for serverless video analytics deployments on top of distributed Edge nodes. The proposed framework incorporates a DRL agent that exploits performance counters to identify the levels of interference and the degree of heterogeneity in the underlying Edge infrastructure. It combines this information with user-defined QoS requirements to improve resource allocations by deciding the placement, migration, or horizontal scaling of serverless functions. We evaluate Darly on a typical Edge cluster with a real-world workflow composed of commonly used serverless video analytics functions and show that our approach achieves efficient scheduling of the deployed functions by satisfying multiple QoS requirements for up to 91.6% (Profile-based) of the total requests under dynamic conditions.

21 pages, 2408 KiB  
Article
BS-GeoEduNet 1.0: Blockchain-Assisted Serverless Framework for Geospatial Educational Information Networks
by Meenakshi Kandpal, Veena Goswami, Yash Pritwani, Rabindra K. Barik and Manob Jyoti Saikia
ISPRS Int. J. Geo-Inf. 2024, 13(8), 274; https://doi.org/10.3390/ijgi13080274 - 1 Aug 2024
Cited by 1 | Viewed by 1216
Abstract
The integration of a blockchain-supported serverless computing framework enhances the performance of computational and analytical operations and the provision of services within internet-based data centers, rather than depending on independent desktop computers. Therefore, in the present research paper, a blockchain-assisted serverless framework for geospatial data visualizations is implemented. The proposed BS-GeoEduNet 1.0 framework leverages the capabilities of AWS Lambda for serverless computing, providing a reliable and efficient solution for data storage, analysis, and distribution. The proposed framework incorporates AES encryption and decryption layers and a queue implementation to achieve a scalable approach for handling larger files. It implements a queueing mechanism for the heavier input/output processes of file processing by using Apache Kafka, enabling the system to handle large volumes of data efficiently. It concludes with the visualization of all geospatial-enabled NIT/IIT details on the proposed framework, which utilizes the data fetched from MongoDB. The experimental findings validate the reliability and efficiency of the proposed system, demonstrating its efficacy in geospatial data storage and processing.

26 pages, 664 KiB  
Article
Comparison of Reinforcement Learning Algorithms for Edge Computing Applications Deployed by Serverless Technologies
by Mauro Femminella and Gianluca Reali
Algorithms 2024, 17(8), 320; https://doi.org/10.3390/a17080320 - 23 Jul 2024
Cited by 1 | Viewed by 1524
Abstract
Edge computing is one of the technological areas currently considered among the most promising for the implementation of many types of applications. In particular, IoT-type applications can benefit from reduced latency and better data protection. However, the price typically paid for these opportunities includes the need to use a reduced amount of resources compared to the traditional cloud environment. Indeed, it may happen that only one computing node can be used. In these situations, it is essential to introduce computing and memory resource management techniques that allow resources to be optimized while still guaranteeing acceptable performance, in terms of latency and probability of rejection. For this reason, the use of serverless technologies, managed by reinforcement learning algorithms, is an active area of research. In this paper, we explore and compare the performance of some machine learning algorithms for managing horizontal function autoscaling in a serverless edge computing system. In particular, we make use of open serverless technologies, deployed in a Kubernetes cluster, to experimentally fine-tune the performance of the algorithms. The results obtained both clarify some basic mechanisms of edge computing systems and related technologies that determine system performance, and guide configuration choices for systems in operation.
(This article belongs to the Special Issue Machine Learning for Edge Computing)

21 pages, 1402 KiB  
Article
Latency-Sensitive Function Placement among Heterogeneous Nodes in Serverless Computing
by Urooba Shahid, Ghufran Ahmed, Shahbaz Siddiqui, Junaid Shuja and Abdullateef Oluwagbemiga Balogun
Sensors 2024, 24(13), 4195; https://doi.org/10.3390/s24134195 - 27 Jun 2024
Cited by 1 | Viewed by 1261
Abstract
Function as a Service (FaaS) is highly beneficial to smart city infrastructure due to its flexibility, efficiency, and adaptability, specifically for integration in the digital landscape. FaaS has a serverless setup, which means that an organization no longer has to worry about specific infrastructure management tasks; developers can focus on deploying and creating code efficiently. Since FaaS aligns well with the IoT, it integrates easily with IoT devices, making it possible to perform event-based actions and real-time computations. In our research, we offer a likelihood-based adaptive machine learning model for identifying the right placement of a function. We employ the XGBoost regressor to estimate the execution time for each function and utilize the decision tree regressor to predict network latency. By encompassing factors like network delay, arrival computation, and emphasis on resources, the machine learning model eases the selection of a placement. For replication, we use Docker containers, focusing on serverless node type and variety, function location, deadlines, and edge-cloud topology. Thus, the primary objectives are to meet deadlines and enhance the use of resources, and we observe that effective utilization of resources leads to enhanced deadline compliance.

19 pages, 1008 KiB  
Article
On the Analysis of Inter-Relationship between Auto-Scaling Policy and QoS of FaaS Workloads
by Sara Hong, Yeeun Kim, Jaehyun Nam and Seongmin Kim
Sensors 2024, 24(12), 3774; https://doi.org/10.3390/s24123774 - 10 Jun 2024
Viewed by 1147
Abstract
A recent development in cloud computing has introduced serverless technology, enabling the convenient and flexible management of cloud-native applications. Typically, Function-as-a-Service (FaaS) solutions rely on serverless backend solutions, such as Kubernetes (K8s) and Knative, to leverage the advantages of resource management for underlying containerized contexts, including auto-scaling and pod scheduling. To take advantage of these features, recent cloud service providers also deploy self-hosted serverless services on their on-premise FaaS platforms rather than relying on commercial public cloud offerings. However, the lack of standardized guidelines on K8s abstractions for fairly scheduling and allocating resources across auto-scaling configuration options in such on-premise hosting environments poses challenges in meeting the service-level objectives (SLOs) of diverse workloads. This study fills this gap by exploring the relationship between auto-scaling behavior and the performance of FaaS workloads depending on scaling-related configurations in K8s. Based on comprehensive measurement studies, we derive guidance on which scaling configurations, such as the base metric and its threshold, should be applied to which workloads to maximize latency SLO satisfaction and the number of responses. Additionally, we propose a methodology to assess the scaling efficiency of the related K8s configurations with regard to the quality of service (QoS) of FaaS workloads.
(This article belongs to the Special Issue Edge Computing in Internet of Things Applications)
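Both this study and the OpenFaaS/PPO work above revolve around tuning the Kubernetes Horizontal Pod Autoscaler. Its core scaling rule, desiredReplicas = ceil(currentReplicas * currentMetric / targetMetric) with a tolerance band (0.1 by default in upstream Kubernetes) inside which no scaling occurs, can be sketched as:

```python
import math

def hpa_desired_replicas(current_replicas: int, current_metric: float,
                         target_metric: float, tolerance: float = 0.1) -> int:
    """Kubernetes HPA scaling rule: desired = ceil(current * ratio), where
    ratio = currentMetric / targetMetric; no change while the ratio stays
    within the tolerance band around 1.0."""
    ratio = current_metric / target_metric
    if abs(ratio - 1.0) <= tolerance:
        return current_replicas
    return math.ceil(current_replicas * ratio)
```

For instance, 4 replicas averaging 90% of a 60% CPU target scale to ceil(4 * 1.5) = 6, while 62% against the same target falls inside the tolerance band and leaves the replica count unchanged. The threshold and base metric studied in the paper map onto `target_metric` and the choice of `current_metric` here.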

20 pages, 1744 KiB  
Article
P2P Federated Learning Based on Node Segmentation with Privacy Protection for IoV
by Jia Zhao, Yating Guo, Bokai Yang and Yanchun Wang
Electronics 2024, 13(12), 2276; https://doi.org/10.3390/electronics13122276 - 10 Jun 2024
Viewed by 782
Abstract
The current usage of federated learning in applications relies on the existence of servers. To address the inability to conduct federated learning for IoV (Internet of Vehicles) applications in serverless areas, a P2P (peer-to-peer) architecture for federated learning is proposed in this paper. Following node segmentation based on limited subgraph diameters, an edge aggregation mode is employed to propagate models inward, and a mode that propagates the model inward to the C-node (center node) while aggregating is proposed. Simultaneously, a personalized differential privacy scheme is designed under this architecture. Through experimentation and verification, the approach proposed in this paper demonstrates both security and usability.
(This article belongs to the Special Issue Network Security Management in Heterogeneous Networks)

27 pages, 4362 KiB  
Article
Enhancing Resource Utilization Efficiency in Serverless Education: A Stateful Approach with Rofuse
by Xinxi Lu, Nan Li, Lijuan Yuan and Juan Zhang
Electronics 2024, 13(11), 2168; https://doi.org/10.3390/electronics13112168 - 2 Jun 2024
Viewed by 650
Abstract
Traditional container orchestration platforms often suffer from resource wastage in educational settings, and stateless serverless services face challenges in maintaining container state persistence during the teaching process. To address these issues, we propose a stateful serverless mechanism based on Containerd and Kubernetes, focusing on optimizing the startup process for container groups. We first implement a checkpoint/restore framework for container states, providing fundamental support for managing stateful containers. Building on this foundation, we propose the concept of “container groups” to address the challenges in educational practice scenarios characterized by a large number of similar containers on the same node. We then propose the Rofuse optimization mechanism, which employs delayed loading and block-level deduplication techniques. This enables containers within the same group to reuse locally cached file system data at the block level, thus reducing container restart latency. Experimental results demonstrate that our stateful serverless mechanism can run smoothly in typical educational practice scenarios, and Rofuse reduces the container restart time by approximately 50% compared to existing solutions. This research provides valuable exploration for serverless practices in the education domain, contributing new perspectives and methods to improve resource utilization efficiency and flexibility in teaching environments.
(This article belongs to the Special Issue Machine Intelligent Information and Efficient System)
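The block-level deduplication that Rofuse relies on can be illustrated with a minimal content-addressed sketch. Fixed-size blocks and SHA-256 digests are assumptions for illustration; the paper's actual mechanism operates on cached container file-system data:

```python
import hashlib

def dedup_blocks(data: bytes, block_size: int = 4096):
    """Split data into fixed-size blocks and store one copy per distinct
    block; return (unique block store, per-position digest index)."""
    store = {}   # sha256 hex digest -> block bytes (one copy per content)
    index = []   # digest for each block position, in order
    for off in range(0, len(data), block_size):
        block = data[off:off + block_size]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)
        index.append(digest)
    return store, index

def restore(store, index):
    """Reassemble the original byte stream from the dedup store."""
    return b"".join(store[d] for d in index)
```

When many similar containers on one node share file-system content, most blocks hash to digests already in the store, so only the distinct blocks consume cache space, which is the effect the paper exploits to cut restart latency.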

17 pages, 1166 KiB  
Article
Resource Allocation and Pricing in Energy Harvesting Serverless Computing Internet of Things Networks
by Yunqi Li and Changlin Yang
Information 2024, 15(5), 250; https://doi.org/10.3390/info15050250 - 29 Apr 2024
Cited by 1 | Viewed by 1445
Abstract
This paper considers a resource allocation problem involving servers and mobile users (MUs) operating in a serverless edge computing (SEC)-enabled Internet of Things (IoT) network. Each MU has a fixed budget, and each server is powered by the grid and has energy harvesting (EH) capability. Our objective is to maximize the revenue of the operator that operates the said servers and the amount of resources purchased by the MUs. We propose a Stackelberg game approach, where servers and MUs act as leaders and followers, respectively. We prove the existence of a Stackelberg game equilibrium and develop an iterative algorithm to determine the final equilibrium price. Simulation results show that the proposed scheme is efficient in terms of the SEC operator’s profit and the MUs’ demand. Moreover, both MUs and SECs gain benefits from renewable energy.
(This article belongs to the Special Issue Internet of Things and Cloud-Fog-Edge Computing)
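The iterative equilibrium-pricing idea can be sketched with a toy single-leader, single-follower model. The linear demand curve and every parameter below are assumptions for illustration; the paper's game involves multiple servers, MU budgets, and energy harvesting:

```python
def follower_demand(price: float, a: float = 10.0, b: float = 1.0) -> float:
    """Follower's best response as a linear demand curve, clipped at zero.
    Purely illustrative; the paper derives demand from MU budgets."""
    return max(0.0, a - b * price)

def find_equilibrium_price(a: float = 10.0, b: float = 1.0,
                           step: float = 0.01, iters: int = 10000) -> float:
    """Leader iteratively adjusts its price by numerical gradient ascent on
    revenue = price * demand(price); for this linear demand the iteration
    converges to the Stackelberg price a / (2b)."""
    def revenue(p: float) -> float:
        return p * follower_demand(p, a, b)

    p, eps = 1.0, 1e-4
    for _ in range(iters):
        grad = (revenue(p + eps) - revenue(p - eps)) / (2 * eps)
        p += step * grad
    return p
```

With a = 10 and b = 1, the iteration converges to the price 5, at which the follower buys 5 units: the leader anticipates the follower's best response when choosing its price, which is the defining structure of a Stackelberg game.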

23 pages, 812 KiB  
Review
Smart Healthcare System in Server-Less Environment: Concepts, Architecture, Challenges, Future Directions
by Rup Kumar Deka, Akash Ghosh, Sandeep Nanda, Rabindra Kumar Barik and Manob Jyoti Saikia
Computers 2024, 13(4), 105; https://doi.org/10.3390/computers13040105 - 19 Apr 2024
Viewed by 2099
Abstract
Server-less computing is a novel cloud-based paradigm that is gaining popularity today for running widely distributed applications. In server-less computing, features are available via subscription. Server-less computing is advantageous to developers since it lets them install and run programs without worrying about the underlying architecture. A common choice for code deployment these days, server-less design is preferred because of its independence, affordability, and simplicity. The healthcare industry is one excellent setting in which server-less computing can shine. In the existing literature, few studies have explored server-less computing with respect to smart healthcare systems. A cloud infrastructure can help deliver services to both users and healthcare providers. The main aim of our research is to cover various topics on the implementation of server-less computing in the current healthcare sector. We have carried out studies of server-less computing as adopted in the healthcare domain and report an in-depth analysis in this article. We list various issues and challenges, along with recommendations for adopting server-less computing in the healthcare sector.

27 pages, 1086 KiB  
Article
Implementing Internet of Things Service Platforms with Network Function Virtualization Serverless Technologies
by Mauro Femminella and Gianluca Reali
Future Internet 2024, 16(3), 91; https://doi.org/10.3390/fi16030091 - 8 Mar 2024
Cited by 1 | Viewed by 2201
Abstract
The need for adaptivity and scalability in telecommunication systems has led to the introduction of a software-based approach to networking, in which network functions are virtualized and implemented in software modules, based on network function virtualization (NFV) technologies. The growing demand for low latency, efficiency, flexibility and security has placed some limitations on the adoption of these technologies, due to some problems of traditional virtualization solutions. However, the introduction of lightweight virtualization approaches is paving the way for new and better infrastructures for implementing network functions. This article discusses these new virtualization solutions and shows a proposal, based on serverless computing, that uses them to implement container-based virtualized network functions for the delivery of advanced Internet of Things (IoT) services. It includes open source software components to implement both the virtualization layer, implemented through Firecracker, and the runtime environment, based on Kata containers. A set of experiments shows that the proposed approach is fast at booting new network functions and more efficient than some baseline solutions, with a minimal resource footprint. Therefore, it is an excellent candidate to implement NFV functions in the edge deployment of serverless services for the IoT.
(This article belongs to the Special Issue Applications of Wireless Sensor Networks and Internet of Things)
