
Optimizing Server-less Computing: Strategies for Resource Allocation and Performance Enhancement in Cloud Operating Systems

Muhammad Asif Chishti, Mehwish Iqbal, Raees Abbas, Muhammad Wajid Khan

Department of Computer Science, University of Engineering & Technology, Lahore, Pakistan.

{2023MSCS01, 2023MSCS09, 2023MSCS14, 2023MSCS30}@student.uet.edu.pk

Abstract— Serverless computing, a disruptive paradigm of cloud computing, allows developers to simplify the application development process by outsourcing infrastructure management to third-party service providers. Serverless computing utilizes cloud technology, providing developers with a streamlined approach to application development and deployment. This paper conducts an extensive literature review and survey, delving into existing research on serverless computing, resource allocation, and performance optimization. Optimizing serverless computing requires a comprehensive strategy incorporating resource allocation and performance enhancement methods. Auto-scaling mechanisms, predictive analytics, parallelism, function placement optimization, caching mechanisms, and specialized hardware accelerators contribute to notable gains in resource utilization and application performance. The study explores methodologies employed in prior research, scrutinizes various serverless platforms, and identifies resource allocation strategies.

Keywords—Serverless Computing, Resource Allocation Strategies, Serverless Performance, Resource Efficiency, Cloud Operating Systems.

1. INTRODUCTION

The concept of a "function" has been central to serverless computing [1]. A function here is a small, specialized piece of code designed to handle a specific task, such as sorting or image processing. When you invoke a function, the serverless system handles all the intricate details of its execution to ensure that it runs smoothly. It is like having a team of experts in charge of the technological aspects, which allows you to concentrate entirely on your creative work.

The concept of serverless computing has earned considerable attention from technology companies and researchers alike. The introduction of serverless platforms by major players like Amazon [2], IBM, Microsoft, and Google has made the development process easier for thousands of developers [3]. In addition, open-source projects have extended the scope of what serverless technology is capable of.

However, the promise of serverless computing comes with its own set of challenges, particularly in the area of complex tasks [4]. In the pursuit of efficiency, the research community has observed a discrepancy during tests on existing serverless platforms: the cumulative time spent on managing resources exceeded that of individual function executions. This underlines the need for a specific focus on exploring resource allocation strategies, which are at the forefront in serverless environments.

This is precisely where our research takes center stage. It takes a specific approach to the area of resource allocation strategies and focuses on their direct impact on serverless system performance. In particular, we need to find a mechanism for allocating resources that optimizes effectiveness to increase the attractiveness of serverless systems. In addition to academic contributions, the goal is to produce tangible insights that will appeal to real-world developers, system administrators, and cloud architects.

The primary objective of this research is the detailed investigation of resource allocation mechanisms and their implications for the performance of serverless computing within cloud operating systems. The study focuses on how resources are allocated for serverless computing, with particular emphasis placed on the automated provisioning of computing resources according to demand. This involves a thorough examination of the underlying mechanisms and processes that govern resource allocation in a serverless environment [5].

Beyond theoretical exploration, this research has a significant practical impact, achieved by translating the findings into practical guidance that enables developers, system administrators, and cloud architects to make informed choices in realistic scenarios. The ultimate aim is to optimize resource utilization and improve the performance of serverless applications running on cloud operating systems.

By achieving these objectives, this research contributes significantly to a broader understanding of serverless computing. It addresses critical aspects of resource allocation and performance in cloud operating systems, promoting the efficiency and attractiveness of serverless computing for all stakeholders concerned.

This paper begins with Section 2, which provides an extensive review of relevant literature. This literature review serves as the foundation for the study, establishing the theoretical framework and contextualizing the research objectives.

Moving forward, Section 3 outlines the methodology employed in the study. This section offers a detailed explanation of the systematic approach used to address the research objectives. By providing transparency in the research process, the methodology section ensures the study's credibility and replicability.

Section 4 presents the research results, showcasing the outcomes of the data analysis conducted. This section provides a comprehensive overview of the collected and analyzed data, allowing readers to understand the study's findings clearly and concisely.

To consolidate the study's essence, Section 5 offers a conclusive summary. This section highlights the key findings and their implications, providing a concise overview of the study's contributions to the field. It serves as a crucial section for readers who seek a quick understanding of the research outcomes.
Lastly, Section 6 outlines future research directions. This forward-looking perspective identifies areas for further investigation and suggests potential avenues for future studies. By doing so, the paper not only contributes to current research but also inspires future scholars to explore new dimensions in the field.

Overall, this research paper follows a well-organized structure: the literature review and theoretical framework in Section 2, the methodology in Section 3, the findings in Section 4, and a conclusive summary and future research directions in Sections 5 and 6. This systematic approach ensures the paper's logical flow and enhances its overall impact on the research community.

2. LITERATURE REVIEW

A literature review and survey were conducted to identify and analyze existing research papers, studies, and projects related to serverless computing, resource allocation, and performance optimization; to examine the methodologies employed in prior research to address similar challenges in serverless computing; and to survey available serverless platforms and their resource allocation strategies.

2.1. Resource Allocation in Server-less Computing

Resource allocation in server-less computing is the process of assigning computing resources such as CPU, memory, and storage to applications running on the cloud. The goal of resource allocation is to optimize the utilization of resources while ensuring that application performance and service level agreements (SLAs) are met. Resource allocation is a critical aspect of server-less computing, as it directly impacts the performance and cost-effectiveness of cloud operating systems. There are several approaches to resource allocation in server-less computing, including:

2.1.1. Auto-scaling

One approach for optimizing resource allocation is using an auto-scaling mechanism, which allocates resources dynamically based on workload demands. Auto-scaling can ensure efficient resource utilization while maintaining optimal performance through the automatic allocation and redeployment of resources in response to different workload levels.

In 2023, Ship et al. [6] proposed an optimization approach for auto-scaling systems on server-less computing platforms, based on the revised SCLP Simplex algorithm and fluid policies. This approach involved developing an extensive modeling framework and carrying out a numerical study. It significantly reduced the average response time and rejection rates, with improvements ranging from 15% to over 300% in most cases, and it sets out a comprehensive modeling framework and an in-depth analysis of the proposal. However, the use of queueing theory for function chains, which limits the scope of the proposed approach, was not taken into account, and the assessment relied on a simulation model and numerical studies, which may not sufficiently represent the complexity of real-world scenarios.

In 2023, Agos Jawaddi and Ismail [7] proposed an extensive review of auto-scaling in server-less computing, including benefits, problems, and a proposed taxonomy of auto-scaling properties. The methodology described auto-scaling solutions. Its strength lies in its comprehensive approach, while its limitation is the narrow scope of the search criteria.

In 2022, Zafeiropoulos et al. [8] presented an overview of reinforcement learning-assisted auto-scaling mechanisms for server-less computing platforms. The methodology is based on using reinforcement learning to optimize auto-scaling decisions. Its strength lies in its ability to cope with changing workloads; its limitation is the need for a large amount of data to train a reinforcement learning model.

In 2023, Tran and Kim [9] described various topics, such as reinforcement learning-based algorithms, efficient server-less frameworks, and resource cost estimation methods. Different methodologies are followed in each of the surveyed papers, which propose different solutions to server-less computing problems. The research's detailed analysis of the state of server-less computing, as well as its potential future developments, gives it a unique edge. The challenge is that, in a real-world scenario, some of the proposed solutions may not be easily implemented, and further research would have to be carried out to verify their effectiveness.

2.1.2. Predictive machine learning algorithms

Predictive analytics and machine learning algorithms are another way to optimize resource allocation, based on predicting workload patterns and allocating resources ahead of time. To anticipate expected demand, cloud operating systems can provision resources immediately based on historical data and trends, minimizing delays and improving overall system performance. Such algorithms include dynamic and elastic ant colony optimization, load balancing, multi-agent optimization, and DRLM algorithms.

In 2021, Nawrocki and Osypanka [10] used machine learning to predict cloud resource demand. This methodology analyzes data on past resource use and processes it with a machine learning model to predict future resource demand from the input information. The method has the potential to improve resource allocation and reduce costs, although the accuracy of its predictions may be weak depending on the quality and quantity of the data submitted.

In 2021, Wei and Gao [11] described several machine learning models and algorithms used for various prediction scenarios. Different methodologies have been applied, e.g., splitting the data set and creating features, analyzing past resource usage with sliding windows and ensembles, and optimizing model parameters through methods like grid search or random search. The approach has several benefits, including improved performance when generating predictions and handling large data sets, but it also has drawbacks like the potential for overcompensation or the need for high computational resources.
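The sliding-window style of workload prediction described above can be sketched in a few lines. This is a minimal illustration, not the method of any surveyed paper: the window size, the moving-average predictor, the per-instance capacity, and the 20% headroom factor are all assumptions chosen for the example.

```python
import math
from collections import deque

class WorkloadPredictor:
    """Predict the next interval's request rate from a sliding window of
    past observations, then pre-provision instances with some headroom."""

    def __init__(self, window_size=5, per_instance_capacity=100, headroom=1.2):
        self.window = deque(maxlen=window_size)      # recent requests-per-interval samples
        self.per_instance_capacity = per_instance_capacity  # requests one instance can serve
        self.headroom = headroom                     # over-provisioning factor (assumed 20%)

    def observe(self, requests_per_interval):
        self.window.append(requests_per_interval)

    def predict(self):
        # Moving average over the sliding window; 0 if no history yet.
        return sum(self.window) / len(self.window) if self.window else 0.0

    def instances_to_provision(self):
        # Scale predicted demand by the headroom factor, then round up
        # to whole instances so capacity is allocated ahead of demand.
        demand = self.predict() * self.headroom
        return max(1, math.ceil(demand / self.per_instance_capacity))

predictor = WorkloadPredictor()
for load in [80, 120, 150, 200, 250]:    # observed requests per minute
    predictor.observe(load)

print(predictor.predict())                 # 160.0 (moving average)
print(predictor.instances_to_provision())  # 2 (160 * 1.2 = 192 requests -> 2 instances)
```

A real predictor would replace the moving average with one of the models the surveyed papers discuss (sliding-window ensembles, reinforcement learning), but the provision-ahead-of-demand loop stays the same.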
In 2021, S. Kurz [12] explored server-less cloud computing for double machine learning, which is capable of producing fast on-demand estimations without additional cloud maintenance effort. The Python prototype implementation DoubleML-Serverless is used for estimating double machine learning models on the server-less Amazon Web Services Lambda platform, and its effectiveness is demonstrated by a case study estimating time and costs. It comes with the benefit of a pay-per-request pricing model.

In 2020, Schuler et al. [13] proposed a reinforcement learning approach to optimize request-based auto-scaling for server-less applications, drawing attention to the impact of predefined concurrency levels on application performance. In the preliminary results, performance is improved compared to default settings.

2.1.3. Optimizing the placement of functions

In addition, optimizing the placement of functions in server-less computing is an essential factor for resource allocation. Cloud service providers may reduce network latency and enhance total application performance with strategic placement of functions near data sources or end users. Several factors need to be considered when determining the optimal location for functions in the cloud infrastructure: data locality, network proximity, and user geographical situation. By considering these factors, providers can achieve reduced latency, enhanced scalability, and improved resource utilization. While optimizing function placement offers numerous benefits, some challenges and considerations need to be addressed: dynamic workload patterns, cost management, and security implications.

In 2019, Nima Mahmoudi et al. [14] proposed a new algorithm to place functions in containers using a TensorFlow network, making it more efficient than the placement algorithms currently employed by FaaS providers. The analysis shows that the proposed algorithm is more effective than any of the current algorithms.

In 2022, Xu and Sun [15] determined the optimal placement of functions in terminal devices, local edge nodes, or remote cloud servers with an adaptive function placement algorithm based on the Markov Decision Process. It has several advantages, among them being able to meet the needs of different indexes and establishing foundations for stateless function placement problems in server-less computing. The algorithm, however, does not predict the requirements of users or study the impact of terminal equipment mobility on function placement decisions.

In 2018, Elgamal [16] proposed a practical algorithm for exploring various combinations of function fusion and placement solutions. The study shows that the price of server-less applications is affected by three main factors: function fusion, placement, and memory configuration. The algorithm finds solutions that optimize the price by 35%-57% with only a 5%-15% increase in latency. Weaknesses include the fact that fusing functions can reduce an application's flexibility and accessibility. The paper calls for more research on generating function code and dynamically switching between fused and non-fused deployments.

In 2023, Li et al. [17] proposed an adaptive approach called FireFace. The methodology contains a prediction module to extract internal function details and estimate execution time, as well as a decision module using the Adaptive Particle Swarm Optimization with Genetic Algorithm Operator (APSO-GA) algorithm to select optimum configuration plans. The ability to minimize financial costs while meeting service-level objectives is a key strength of the approach. The weak spot is that the information provided does not include concrete evaluation criteria or quantitative results.

2.2. Performance Enhancement Strategies

Apart from allocating resources, other methods may be implemented to improve the efficiency of server-less computing inside cloud operating systems. Using concurrency and parallelism to optimize function execution is one such tactic. Through task slicing and concurrent execution, cloud providers may shorten execution times and increase total system throughput. Furthermore, by keeping frequently requested data closer to computational resources, caching strategies may be used to improve performance. Cloud providers can reduce data retrieval latency and enhance application responsiveness by adopting caching solutions at several levels within the server-less architecture, such as function-level caching and edge caching. Additionally, the performance of server-less applications that need complex computing tasks may be greatly improved by utilizing specialized hardware accelerators like GPUs (Graphics Processing Units) and FPGAs (Field-Programmable Gate Arrays). Through the offloading of particular workloads to hardware accelerators, cloud providers may increase performance significantly without increasing costs. Monitoring and improving the performance of cloud-based apps is known as performance management in server-less computing. Ensuring applications satisfy service level agreements (SLAs) while reducing costs and optimizing resource use is the aim of performance management. There are several approaches to performance management in server-less computing, including monitoring and analytics, auto-scaling, and load balancing.

In 2023, Wen et al. [18] discussed SuperFlow, a specific approach for performance testing that consists of an accuracy check and a stability check specially designed for server-less computing. SuperFlow's ability to generate test results with 97.22% accuracy, a significant increase over existing techniques such as PT4Cloud and METIOR, has been of great importance. A limitation is that the document does not describe the particular methodologies or criteria used to compare PT4Cloud and METIOR with SuperFlow.
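Performance checks of the kind SuperFlow automates rest on repeated measurement of the same function. The following harness is a generic illustration, not SuperFlow's actual procedure: the stubbed workload, the number of runs, and the use of the coefficient of variation as a stability indicator are assumptions for the example.

```python
import statistics
import time

def measure(fn, runs=30):
    """Invoke fn repeatedly and summarize latency: mean, standard
    deviation, and coefficient of variation (a stability indicator)."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - start)
    mean = statistics.mean(samples)
    stdev = statistics.stdev(samples)
    return {"mean_s": mean, "stdev_s": stdev, "cv": stdev / mean}

# Stand-in for a serverless function under test (an assumption for the
# example; a real harness would invoke the deployed function remotely).
def sample_function():
    sum(i * i for i in range(10_000))

report = measure(sample_function)
# A small coefficient of variation suggests stable performance; a large
# one signals the run-to-run variance that such empirical studies analyze.
print(f"cv = {report['cv']:.2f}")
```

Real tools add the statistical stopping rules and accuracy checks that distinguish them from this naive fixed-run loop.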
In 2023, Wen et al. [19] investigated differences in server-less function performance through an empirical study: an assessment of the performance of 65 server-less functions, each executed several times with a low number of repeats, together with an analysis of their variance. The strength of this study is that it brings attention to a very important issue in server-less computing, providing insight that can improve testing techniques. Its weaknesses are that the detailed performance metrics used for measuring variance, and the specific papers from the literature review, are not specified in the information provided.

In 2018, Bardsley et al. [20] discussed an investigation of the performance profile of a server-less architecture based on Amazon Web Services Lambda, focusing on latency and performance optimization. This study provides evidence from a range of sources, such as a complete analysis of Lambda efficiency, real-world case studies, and industry best practices. The study is of great value as it offers insight into how to deploy complex systems on a server-less architecture and lays down various optimization strategies. Weaknesses include a lack of detailed information on the methodology used.

In 2022, Mahmoudi and Khazaei [21] described a proposed analytical model of performance on server-less computing platforms that allows performance parameters, such as average response times and cold-start probability, to be calculated in advance of physical deployment. The methodology involved extensive experimentation on AWS Lambda and assumes no unrealistic restrictions, for increased fidelity. Its strength is the model's flexibility and potential to optimize infrastructure for each workload, resulting in a reduction of energy consumption and CO2 emissions. The problem is that the information provided does not indicate any particular workload or system configuration to be analyzed.

2.3. Summary

To sum up, optimizing server-less computing requires a thorough strategy that includes cloud operating system performance improvement methods and resource allocation strategies. Cloud providers can achieve notable gains in resource utilization and application performance by utilizing auto-scaling mechanisms, predictive analytics, parallelism, function placement optimization, caching mechanisms, and specialized hardware accelerators. The next section presents the detailed methodology of the paper.

Serverless computing is a new way of doing things in the cloud, and it is becoming popular for creating different kinds of software. With serverless, developers can focus on the specific tasks of their application, freeing them from the hassle of managing the underlying infrastructure, which can be complicated and error-prone [22]. A serverless service can be viewed as a FaaS in which the developer can develop, run, and manage functions without the trouble of building and maintaining the infrastructure. The affordability of serverless services is mainly due to the reduced costs of the providers, and a main reason for the reduced costs is that resource multiplexing leads to higher utilization of available resources. For example, consider the case in which an application receives one request every minute and it takes milliseconds to complete each request. In this case, the mean CPU usage is very low; if the application is deployed to a dedicated machine, this is highly inefficient [23].

3. METHODOLOGY

A comprehensive approach is used for collecting papers on the topic of optimizing serverless computing: strategies to allocate resources and improve the performance of cloud operating systems. To ensure that a large range of relevant literature is collected, the method for collecting papers relies on using various databases and search strategies.

3.1. Database Selection

The selection of databases is crucial in obtaining high-quality and relevant research papers. Databases such as IEEE Xplore, ACM Digital Library, Science-Direct, Springer-Link, Google Scholar, Google Semantic, and Arxiv are commonly used for accessing scholarly articles related to cloud computing, server-less computing, resource allocation, and performance enhancement in cloud operating systems; they are given in Table 1 below.

Table 1: Databases
  IEEE Xplore           https://ieeexplore.ieee.org/
  ACM Digital Library   https://dl.acm.org/
  Science-Direct        https://www.sciencedirect.com/
  Springer-Link         https://link.springer.com/
  Google Scholar        https://scholar.google.com/
  Google Semantic       https://www.semanticscholar.org/
  Arxiv                 https://arxiv.org/

The numbers of papers retrieved from the selected databases are shown in the chart below.
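The resource-multiplexing argument above can be made concrete with a quick calculation; the 50 ms request duration is an assumed figure standing in for the paper's "milliseconds to complete each request."

```python
# Mean CPU utilization of a dedicated machine serving one request per
# minute, where each request takes 50 ms (an assumed value for the
# example above).
requests_per_minute = 1
request_duration_s = 0.050                     # 50 ms per request
busy_s_per_minute = requests_per_minute * request_duration_s
utilization = busy_s_per_minute / 60.0         # busy fraction of each minute

print(f"{utilization:.5%}")   # well under 0.1%: the machine is idle ~99.9% of the time
```

Multiplexing many such applications onto shared provider infrastructure is what recovers this idle capacity and drives down cost.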
3.2. Search Strategy

The search strategy involves using a combination of keywords and abstracts to refine the search results. Keywords such as "server-less computing," "resource allocation," "performance enhancement," "cloud operating systems," "optimization strategies," and "cloud computing" can be used in various combinations to yield comprehensive results. Additionally, the abstract of each result is read to determine whether it is a relevant research article.

[Chart: papers retrieved per database — IEEE Xplore: 6, Arxiv: 4, Science Direct: 4, Research Square: 1, Google Semantic: 1]

3.2.1. Inclusion criteria

In selecting papers, inclusion criteria should be established to ensure that the collected literature is relevant to the topic. Inclusion criteria may include publication date (e.g., within the last 5-10 years), peer-reviewed articles, relevance to server-less computing and cloud operating systems, and focus on resource allocation and performance enhancement strategies, as shown in Table 2.

Table 2: Inclusion criteria
  Field       Computer science and computer engineering
  Year        2018 to 2023
  Related to  Server-less computing

3.2.2. Exclusion criteria

Similarly, exclusion criteria should be defined to filter out irrelevant literature. Exclusion criteria may include non-peer-reviewed sources, outdated publications, non-English-language articles, and articles not directly related to serverless computing or cloud operating systems, as shown in Table 3.

Table 3: Exclusion criteria
  Not published in English
  Without accessible full text
  Not formally peer reviewed
  Duplicates of previous publications

4. FINDINGS

Findings from the literature review and survey underlined the wide range of approaches for resource allocation and increased performance in serverless computing. Significant improvement in response time and rejection rate is demonstrated through automatic scaling mechanisms such as those proposed by Ship et al. [6]. The benefits of predictive machine learning algorithms, which Wei and Gao [11] discuss in detail, include increased performance, although there are problems like overcompensation and resource availability that need to be addressed. The possibilities of reduced latency, increased capacity, and improved resource utilization are offered by optimizing the placement of functions, investigated by Mahmoudi et al. [14], Xu and Sun [15], and Elgamal [16]. Nevertheless, solutions must be found to address the challenges of flexible workload patterns, cost management, and security implications. Understanding testing techniques, latency optimization, and performance model analysis on serverless computing platforms provides insight into the strategies for improving performance, which are presented by Wen et al. [18][19], Bardsley et al. [20], and Mahmoudi and Khazaei [21].

5. CONCLUSION

In conclusion, this paper provides a comprehensive examination of resource allocation and performance enhancement strategies for serverless computing. The findings indicate the effectiveness of auto-scaling mechanisms, predictive machine learning algorithms, and optimized function placement in improving resource utilization and application performance. The holistic understanding of the optimal use of serverless computing is driven by performance enhancement strategies, such as testing approaches and analytical models. With serverless computing becoming a reality, it will be crucial for developers, system administrators, and cloud architects to address the challenges and leverage innovative strategies. This paper contributes valuable insights to the field, offering a foundation for further research and practical applications in the dynamic landscape of serverless computing.

6. FUTURE WORK

In looking into the future of serverless computing, we identify crucial areas for research that promise improvements in efficiency, security, and flexibility. A key focus is to explore serverless deployment strategies across multiple cloud providers, so that not only interoperability but also resilience and redundancy of critical applications can be ensured. This research is poised to give organizations a competitive edge, enabling flexibility and reliability in deployment. Moreover, the intersection of serverless technologies with Internet of Things scenarios becomes an attractive frontier. This research could unlock innovative solutions to optimize serverless architectures for high-performance and scalable Internet of Things applications, addressing problems related to scalability, data processing, and real-time requirements in Internet of Things deployments.

References:
[1] Chai, Wesley. "What Is Cloud Computing? Everything You Need to Know." TechTarget, 2021, www.techtarget.com/searchcloudcomputing/definition/cloud-computing.
[2] Graham, Simon. "Serverless Computing with Amazon Web Services." Www.theseus.fi, 2022, www.theseus.fi/handle/10024/752387.
[3] Kelly, Daniel, et al. Serverless Computing: Behind the Scenes of Major Platforms. 2020, pp. 304–312.
[4] Sadaqat, Mubashra, et al. "Serverless Computing: A Multivocal Literature Review." 13, 2018, hiof.brage.unit.no/hiof-xmlui/handle/11250/2577600. Accessed 12 Sept. 2023.
[5] Saha, Aakanksha, and Sonika Jindal. EMARS: Efficient Management and Allocation of Resources in Serverless. 2018, pp. 827–830.
[6] Ship, Harold, et al. "Optimizing Simultaneous Autoscaling for Serverless Cloud Computing." Semantic Scholar, 29 Oct. 2023, www.semanticscholar.org/paper/Optimizing-simultaneous-autoscaling-for-serverless-Ship-Shindin/820a4913f64365ca6fadd94791170e426c543d2d, https://doi.org/10.48550/ARXIV.2310.19013.
[7] Agos Jawaddi, Siti Nuraishah, and Azlan Ismail. "Autoscaling in Serverless Computing: Taxonomy and Open Challenges." Www.researchsquare.com, 12 May 2023, www.researchsquare.com/article/rs-2897886/v1, https://doi.org/10.21203/rs.3.rs-2897886/v1.
[8] Zafeiropoulos, Anastasios, et al. "Reinforcement Learning-Assisted Autoscaling Mechanisms for Serverless Computing Platforms." Simulation Modelling Practice and Theory, vol. 116, Apr. 2022, p. 102461, https://doi.org/10.1016/j.simpat.2021.102461. Accessed 1 Feb. 2022.
[9] Tran, Minh-Ngoc, and YoungHan Kim. "Optimized Resource Usage with Hybrid Auto-Scaling System for Knative Serverless Edge Computing." Future Generation Computer Systems, vol. 152, 1 Mar. 2024, pp. 304–316, www.sciencedirect.com/science/article/abs/pii/S0167739X23004156, https://doi.org/10.1016/j.future.2023.11.010. Accessed 16 Nov. 2023.
[10] Nawrocki, Piotr, and Patryk Osypanka. "Cloud Resource Demand Prediction Using Machine Learning in the Context of QoS Parameters." Journal of Grid Computing, vol. 19, no. 2, 8 May 2021, https://doi.org/10.1007/s10723-021-09561-3.
[11] Wei, Jinyue, and Ming Gao. Workload Prediction of Serverless Computing. 23 July 2021, https://doi.org/10.1145/3480001.3480016. Accessed 18 Nov. 2023.
[12] S. Kurz, Malte. "Distributed Double Machine Learning with a Serverless Architecture." Research.spec.org, 2021, ieeexplore.ieee.org/abstract/document/8737391. Accessed 18 Nov. 2023.
[13] Schuler, Lucia, et al. "AI-Based Resource Allocation: Reinforcement Learning for Adaptive Auto-Scaling in Serverless Environments." NASA ADS, 1 May 2020, ui.adsabs.harvard.edu/abs/2020arXiv200514410S/abstract. Accessed 18 Nov. 2023.
[14] Nima Mahmoudi, et al. "Optimizing Serverless Computing: Introducing an Adaptive Function Placement Algorithm." Nima's Webpage, 19 Apr. 2020, nima-dev.com/publication/mahmoudi-2019-optimizing/. Accessed 18 Nov. 2023, https://doi.org/10.5555/3370272.3370294.
[15] Xu, Donghong, and Zhongbin Sun. "An Adaptive Function Placement in Serverless Computing." Cluster Computing, 30 Jan. 2022, https://doi.org/10.1007/s10586-021-03506-x. Accessed 20 May 2022.
[16] Elgamal, Tarek. Costless: Optimizing Cost of Serverless Computing through Function Fusion and Placement. Oct. 2018, pp. 300–312, doi: 10.1109/SEC.2018.00029.
[17] Li, Ming, et al. "FireFace: Leveraging Internal Function Features for Configuration of Functions on Serverless Edge Platforms." Sensors, vol. 23, no. 18, 12 Sept. 2023, pp. 7829, www.ncbi.nlm.nih.gov/pmc/articles/PMC10535806/, https://doi.org/10.3390/s23187829. Accessed 18 Nov. 2023.
[18] Wen, Jinfeng, et al. "SuperFlow: Performance Testing for Serverless Computing." ArXiv.org, 2 June 2023, export.arxiv.org/abs/2306.01620. Accessed 18 Nov. 2023.
[19] Wen, Jinfeng, et al. "Revisiting the Performance of Serverless Computing: An Analysis of Variance." ArXiv.org, 7 May 2023, arxiv.org/abs/2305.04309. Accessed 18 Nov. 2023.
[20] D. Bardsley, L. Ryan, and J. Howard, "Serverless Performance and Optimization Strategies," 2018 IEEE International Conference on Smart Cloud (SmartCloud), New York, NY, USA, 2018, pp. 19–26, doi: 10.1109/SmartCloud.2018.00012.
[21] Mahmoudi, Nima, and Hamzeh Khazaei. "Performance Modeling of Serverless Computing Platforms." IEEE Transactions on Cloud Computing, vol. 10, 2022, pp. 2834–2847, https://doi.org/10.1109/TCC.2020.3033373.
[22] Jinfeng Wen, Zhenpeng Chen, Xin Jin, and Xuanzhe Liu. 2023. Rise of the Planet of Serverless Computing: A Systematic Review. ACM Trans. Softw. Eng. Methodol. 32, 5, Article 131 (September 2023), 61 pages. https://doi.org/10.1145/3579643
[23] Hossein Shafiei, Ahmad Khonsari, and Payam Mousavi. 2022. Serverless Computing: A Survey of Opportunities, Challenges, and Applications. ACM Comput. Surv. 54, 11s, Article 239 (January 2022), 32 pages. https://doi.org/10.1145/3510611
