Optimizing Java For Serverless Applications
Master Thesis
Submitted in partial fulfillment of the requirements for the degree of
Master of Science
Author:
Roman Ritzal, BSc
Supervisor:
FH-Prof.in Mag.a Dr.in Sigrid Schefer-Wenzl, MSc BSc
Date:
16. 08. 2020
Declaration of authorship:
I declare that this Master Thesis has been written by myself. I have not used any other
than the listed sources, nor have I received any unauthorized help.
I hereby certify that I have not submitted this Master Thesis in any form (to a reviewer for
assessment) either in Austria or abroad.
Furthermore, I assure that the (printed and electronic) copies I have submitted are
identical.
Abstract
Recent advancements in the areas of virtualization and software architecture have led to the development of the serverless service model. Serverless technology allows developers to split their code base into small stateless chunks of software that can be deployed and run individually, while the underlying infrastructure is completely abstracted away from the developer. Serverless technology is offered by all the market leaders in cloud computing, such as Amazon, Microsoft and Google. Many companies using Java want to switch to a serverless architecture, but the market is dominated by scripting languages like Python and NodeJS. A few Java options are available to developers, but they have not been compared to see which one offers the best performance. In this thesis, several Java frameworks were tested with a load testing framework in different scenarios to determine which framework performs best. The metrics accounted for are execution time, cold start time and memory usage. To further optimize the serverless functions, GraalVM, a new virtual machine for Java projects, was introduced. GraalVM offers many optimizations and shorter start-up times for Java projects.
List of abbreviations
Key terms
Software architecture
Software engineering
Serverless functions
Cloud computing
Framework
Cloud Provider
Scaling
AWS
Amazon S3
Amazon API Gateway
Amazon CloudWatch
Amazon EC2
AWS Lambda
AWS SDK
Java
GraalVM
Artillery
JSON
YAML
JDK
JVM
Load testing
Performance testing
Spring
Micronaut
Quarkus
Spring Cloud
Spring Boot
Table of contents
ABSTRACT
LIST OF ABBREVIATIONS
KEY TERMS
TABLE OF CONTENTS
1. INTRODUCTION
2. BACKGROUND
2.1 Cloud computing
2.1.1 Essential Characteristics
2.1.2 Benefits of cloud computing
2.1.3 Disadvantages of cloud computing
2.1.4 Service models
2.2 Software Architecture
2.2.1 Monolith
2.2.2 N-tier
2.2.3 Microservice
2.2.4 Serverless
3. TECHNICAL KNOWLEDGE
3.1 Serverless
3.1.1 Event driven architecture
3.1.2 Advantages of serverless
3.1.3 Disadvantages of serverless
3.1.4 Cold Start
3.1.5 Use Cases for serverless
3.2 Amazon Web Services - AWS
3.2.1 General
3.2.2 AWS Lambda
3.2.3 Amazon S3
3.2.4 Amazon API Gateway
3.2.5 Amazon CloudWatch
3.3 GraalVM
4. PRACTICAL SOLUTION
4.1 Scenario
4.2 Functional requirements
4.3 Metrics
4.3.1 Cold start up
4.3.2 Execution time
4.3.3 Memory usage
4.4 Selection of the cloud provider
4.5 Evaluation of the Java frameworks
4.5.1 Evaluation criteria
4.5.2 Spring
4.5.3 Spring Cloud Function
4.5.4 Micronaut
4.5.5 Quarkus
4.6 Performance testing
4.7 Architecture
4.8 Setting up the S3 bucket
4.9 Setting up the AWS Lambdas
4.9.1 Setting up the business logic
4.9.2 List of functions
4.9.3 Deployment
4.10 Performance Tests
4.10.1 Setting up the load testing
4.10.2 Getting results from CloudWatch logs
4.10.3 Cold start monitoring
4.10.4 Memory usage
4.10.5 Execution Time
4.11 Optimization with GraalVM
4.11.1 Creating a native image
4.11.2 Deployment of native image
4.11.3 Comparison between JVM and GraalVM
5. CONCLUSION
LIST OF FIGURES
LIST OF TABLES
1. INTRODUCTION
Smaller, cheaper, faster, stronger seems to be the new motto of the software industry. In recent years the buzzword serverless has come up in more and more technical literature and articles. Entire books and forums are dedicated to this new technology. As with all new technologies, it at first seems somewhat intimidating and complicated from the outside, until people see the value and benefits that it brings.
With a constant stream of new technologies and innovations in the software development industry it can be hard to keep up with the newest trends. Constant improvements in the areas of cloud computing and software development have accelerated this pace immensely. A technology that was state of the art a few months ago might be completely outdated tomorrow or next week. Sometimes a software project is even abandoned, and bugs are left unresolved.
Many companies are now coming to terms with their technical deficiencies and trying to catch up to the market leaders. These companies often operate with a very old code base where they just keep piling on feature after feature. For these companies the change can be very emotionally taxing and, overall, a big financial risk.
Still, some companies make the jump into the cold pool of cloud computing and try to change things up. They might move their hosting to a cloud provider, and might also invest time and money into refactoring their old code base into a brand new microservice or serverless architecture.
According to daxx.com there are about 7.5 million Java developers worldwide.1 This leaves Java only behind JavaScript and makes it the second most popular programming language in the world. According to the Popularity of Programming Language (PYPL) index, Java is only trailing behind Python.2 Both JavaScript's and Python's popularity came relatively recently and exploded with the introduction of machine learning, IoT and serverless functions. Java's fall from grace could be explained by a lack of adaptation to these new software development categories.
In recent years Java has become more active on the serverless front. For their switch to a
serverless environment in Java many companies are relying on new innovations.
Otherwise these companies would be forced to introduce a new programming language
to their development stack and would have to rewrite a lot of the already existing code
base. For that sake new frameworks like Micronaut or Quarkus have been created to help
writing serverless functions in Java.
1
https://www.daxx.com/de/blog/entwicklungstrends/anzahl-an-softwareentwicklern-deutschland-weltweit-usa
2
http://pypl.github.io/PYPL.html
The motivation for this thesis comes from the recent improvements in the Java community. Within the community there is a lot of confusion about the best way to create a serverless function and to get the best performance out of it. Serverless functions place a great emphasis on speed and memory usage because of their unique billing model. Many Java frameworks are not optimized for this use case. New technologies like GraalVM promise to improve the performance of Java projects even further. Technologies like these could help Java reclaim the throne in the popularity rankings.
The thesis is structured as follows. Chapter 2 is about the relevant background of the
thesis and talks about cloud computing and software architecture. Chapter 3 introduces
the technical knowledge that is required for the practical solution, which are topics like
Serverless, Amazon Web Services and GraalVM. Chapter 4 contains the practical
solution and talks about the way the project is set up and how the tests were executed.
Chapter 5 provides some concluding remarks.
2. BACKGROUND
The move towards serverless functions has been driven by rapid developments and changes in the cloud computing landscape. These changes helped enable new possibilities for software design, towards smaller and more stateless units of functionality.
Section 2.1 introduces the current state of cloud computing. It also discusses the advantages and disadvantages of cloud computing and the different types of cloud computing.
Section 2.2 introduces options for software architectures that could be used within software projects.
2.1 Cloud computing
Cloud functionality is ubiquitous. Cloud providers offer high guarantees for the availability of their hardware. For their object-based cloud storage service S3 (Simple Storage Service), AWS offers a 99.999999999% guarantee of durability and a 99.99% guarantee of availability in the standard edition.3
Using state-of-the-art virtualization technology, cloud providers have the capability to offer customers a theoretically unlimited capacity of computing resources. This makes it possible for the customer to scale horizontally or vertically without second-guessing hardware availability. Scaling changes can be made manually by the user or automatically by the platform; the latter mechanism is called auto-scaling. Auto-scaling offers customers a solution for dealing with irregular demand with minimal management effort. When using auto-scaling, the customer can set thresholds for the CPU usage or the memory usage of the resource. If a threshold is crossed, the cloud platform provisions new cloud resources to balance out the load on the cloud application. The system automatically scales back when the capacity is not needed any more, so the client won't be charged for capacity that he isn't actively using. [1] [2]
3
https://aws.amazon.com/de/s3/storage-classes/
2.1.1 Essential Characteristics
According to the NIST (National Institute of Standards and Technology), the cloud computing model offers five essential characteristics to its clients.
The cloud computing model offers cloud functionality as an on-demand self-service. This gives the customer the option to provision computing capabilities as needed, so the customer cannot be charged for capacity that he did not order or want. Cloud providers also offer a fully managed solution to the customer, in which new computing resources are provisioned automatically.
Another characteristic of the cloud computing model is broad network access. This makes it possible for the platform to be accessed by a variety of thin and thick clients.
Since the hardware resources in the cloud platform generally are not used individually, the cloud computing model uses resource pooling for provisioning new resources. All the resources are combined into a large pool of hardware resources. The user has no knowledge of the hardware used or its location.
The cloud computing model offers the customer a system with rapid elasticity. New resources can be provisioned or released very quickly. To the user, the theoretical maximum of resource capabilities seems endless.
Finally, the cloud providers offer solutions to measure and optimize resource usage in the cloud system. [3] [1]
2.1.2 Benefits of cloud computing
One of the biggest benefits of cloud computing is that cloud functionality is offered as an on-demand service. Customers can provision new computing resources according to their needs without interacting with an employee of the cloud provider. This gives them full control over their cloud application. They have the opportunity to scale up and add new resources when there is a shortage of capacity, or scale down and remove resources when they are being underused.
Another big advantage of cloud computing is that the resources within the cloud are not offered individually but as a resource pool. This abstracts the details of the hardware used away from the customers. The customers don't know the location of the hardware they are using, and the cloud providers can use a variety of hardware to create resources within the cloud. Resource pooling also abstracts the memory, storage and network bandwidth away from the customers. Therefore, from the customer's perspective the available resources are seemingly limitless.
Through this seemingly limitless amount of available resources, cloud computing offers a very elastic approach to provisioning and scaling. Customers can scale their system up or down as much as they like. It is also possible to let the cloud provider manage the scaling automatically. Resource capabilities can be purchased at any time and in any quantity.
Every part of the cloud resources is measured by the cloud provider. The results are then reported back to the customers in a transparent way. The customer is charged based on known metrics such as storage used, number of transactions or processing time. This way a customer knows what they are paying for and why they are being charged at any point in time.
Cloud computing removes a lot of overhead for its customers. Tasks like network administration and hardware maintenance are completely handled by the cloud provider. Since these tasks are completely outsourced, this can also help reduce the time-to-market of a product.
The massive amount of available hardware also offers another advantage: the entire system becomes more failure resistant and offers higher reliability. As a result, cloud architectures can process large amounts of varying traffic. Best practices like load balancing, auto-scaling and automatic failover can help ensure that the system is capable of reacting to sudden load spikes and is up and running consistently.
When using the cloud resources of a cloud provider, the risks of IT management and hardware maintenance are outsourced to the cloud provider. To better represent what the customer is responsible for and what the cloud provider is responsible for, AWS has created the shared responsibility model.
As shown in figure 1, the customer is responsible for the customer data, such as login data or data from the application. He is also responsible for the application itself, the identity and access management of the application, and the network and firewall configuration. AWS is responsible for the software on the machines and the maintenance of the hardware infrastructure within the AWS cloud. For some parts of the model, the customer and AWS share the responsibility. Areas of shared responsibility include patching, configuration management and employee training. For patching, AWS is responsible for the hardware landscape and the operating systems installed on the hardware, while the customer is responsible for patching the guest operating system and the dependencies of the application. AWS is only responsible for the configuration of the hardware infrastructure, while the customer is responsible for the configuration of the guest operating system, databases and applications. Both AWS and the customer are responsible only for the training of their own employees. [4]
As stated in the shared responsibility model, the cloud provider is responsible for patching the hardware and the host operating systems. Due to having total control over the physical infrastructure, cloud providers can roll out system-wide patches or upgrades very easily. This gives customers an efficient and fast way to work with state-of-the-art infrastructure and the newest operating systems.
Cloud computing also offers the big advantage of a low barrier to entry. Customers don't have to buy and set up hardware themselves, and the networking for the hardware is also taken care of by the cloud provider. This advantage attracts a lot of smaller startups, especially from Silicon Valley. Many multibillion-dollar companies like Airbnb4 and Lyft5 had their beginnings with an application deployed on a cloud platform. Traditional hosting in a datacenter or a server room requires a much higher upfront capital expenditure than cloud computing. [1]
4
https://aws.amazon.com/de/solutions/case-studies/airbnb/
5
https://aws.amazon.com/de/solutions/case-studies/lyft/
2.1.3 Disadvantages of cloud computing
Cloud computing does not only offer advantages to the customer. Some of the
advantages can lead to disadvantages further down the road.
Control of all the underlying hardware infrastructure belongs to the cloud provider. For a client who has very specific hardware requirements, cloud computing can feel very restrictive. A disadvantage of cloud computing compared to a physical server room can be the limited customizability of the hardware components used. Since the customer has full control over on-premise hardware, more hardware features can be used there than with cloud computing.
One of the biggest concerns for companies considering the switch to cloud computing is privacy. Companies using a cloud computing solution are storing their data on hardware that they do not control. Companies that handle a lot of personal or top-secret data tend to avoid cloud services for exactly this reason.
Another big concern of companies is security. The protection of the hardware and the system is entirely up to the cloud provider. If the cloud provider has a security leak, the data of the customer might leak, and the customer can do nothing about it. In the worst-case scenario this data could leak to the public or be sold to a competitor and ruin the entire company. The model of shared responsibility assumes a lot of trust from the customer that the cloud provider will not mishandle their data and will keep their data secure. Cloud providers are also profit-oriented private companies, which adds a certain layer of mistrust to the relationship with the customer. Cloud computing platforms are inherently not invulnerable to hacking attacks.
A good example of this is the Google deployment in China. The deployment had been subject to a filtering mechanism, which filtered out content that the Chinese government objected to. After five years of deployment, Google detected that Chinese hackers were accessing Gmail accounts of Chinese citizens. As a response, Google moved all their servers for Google.cn to Hong Kong. [1]
Globally operating companies must follow regional compliance requirements for the countries their application is deployed in. This can become an issue when countries have different legislation for the encryption of data at rest or data in transit. The cloud provider places the entire compliance burden on the customer. If the customer distributes his cloud resources across multiple states and countries, he might have to follow multiple jurisdictions. [1]
2.1.4 Service models
In a cloud computing platform, responsibility for the hosted system is always shared. The cloud provider uses abstraction of the underlying system to offer the client options to pick the level of responsibility he desires. These abstraction levels are offered as services following the "XaaS" or "X as a service" naming model. There are three basic models that have been universally accepted by every cloud provider.
IaaS or infrastructure as a service offers the client virtual storage, virtual machines and virtual infrastructure. The cloud provider provides and manages the infrastructure. The client is responsible for everything deployed on it, including, for example, the operating systems on the virtual machines.
6
https://dachou.github.io/2018/09/28/cloud-service-models.html
An example for IaaS is the AWS service EC2 or Elastic Compute Cloud. EC2
offers computing resources as virtual instances to the customer. The instances
can be launched with a variety of operating systems.7
PaaS or platform as a service offers the client tools to deploy applications on the
cloud infrastructure. As shown in figure 2, the cloud provider manages the
infrastructure, the operating systems on the platform and the deployment
software. The client has no control over the underlying cloud infrastructure, like
networks, storage and operating systems. He is only responsible for the
application and the data of the application.
An example of PaaS would be AWS Elastic Beanstalk. AWS Elastic Beanstalk is a service for hosting web applications that offers customers an easy-to-use deployment model. The customer only has to upload the source code of the web application. AWS Elastic Beanstalk handles the provisioning and scaling of the infrastructure as well as the installation and management of the application.8
SaaS or software as a service offers the client a complete, ready-to-use application. The responsibility of the client ends with the data he puts into the SaaS application. The provider is responsible for maintaining the application and the infrastructure. Most of the time the customer does not even know whether the application is hosted in a cloud environment, or which cloud provider the application is hosted on.
An example of SaaS would be a web service like Google Drive or Dropbox. [1] [3]
7
https://aws.amazon.com/de/ec2/
8
https://aws.amazon.com/de/elasticbeanstalk/
2.2 Software Architecture
To better understand the use cases for serverless applications, it is necessary to first have a look at the available software architectures for web applications.
2.2.1 Monolith
A monolith or single-tier model is an application that consists of a single code base. All the layers of the application are kept under one roof, meaning the frontend, backend and database all run on the same machine. In earlier days this was the most practical solution for delivering software. The monolith is not automatically inferior to other software architectures; there are certain advantages that the monolith offers compared to them.
One advantage of the monolith is that it offers fast and secure access between processes. It also offers centralized administration, due to being hosted on a single resource. Another advantage is that the monolith offers efficient power utilization, again based on being hosted on a single resource.
For a small monolith, scalability is not an issue, but compared to other software architectures the monolith presents big challenges in terms of maintenance and scalability as the system grows.
2.2.2 N-tier
The introduction of virtualization technology made it possible to design software in a more flexible way. From then on it was not required to have all the software components bound to a single resource. In the n-tier architecture, software components are split up by their tier. The presentation layer can be split from the business logic and the data access layer. The client-server architecture separates the presentation tier from the business and data access tiers; the three-tier architecture separates all three tiers. The big advantage of the n-tier architecture compared to the monolith is that every tier can be deployed and hosted separately. Teams can split up their responsibilities and take charge of a single tier. This can also lead to friction, since creating a single use case now requires all the teams to work together in unison.
2.2.3 Microservice
With new technologies like containerization emerging and rising in popularity, software architectures also started to adapt and became more flexible in their deployment approach. The service-oriented architecture (SOA) is a pattern which does not split the code into tiers or layers but splits the software into services. A bus connects all the services together and makes it possible for services to communicate with each other. The microservice approach follows the same principle of a loosely coupled application and goes a step further. The communication in a microservice architecture is more lightweight than in SOA and mainly uses HTTP-based protocols and APIs.
2.2.4 Serverless
The serverless architecture is the next step of virtualization after the microservice architecture. It decomposes the application into even smaller chunks, which can be deployed even faster than a microservice. Serverless technology lowers the time to value drastically. With serverless technology the only things the customer has to worry about are the runtime of the function and the function code itself. The cloud provider handles everything else. [5]
3. TECHNICAL KNOWLEDGE
The following chapters introduce core technologies that are required for the practical part of the thesis.
Section 3.1 discusses serverless functions and their features.
Section 3.2 takes a closer look at AWS and the core technologies used in the practical evaluation.
3.1 Serverless
Advancements in the areas of virtualization and software architecture led to the development of the serverless service model. Serverless, also called Function as a Service (FaaS), offers the customer a simplified model for the deployment of cloud applications. This is made possible by new advancements in the areas of containerization and cloud computing. Customers can split the application into small stateless functions. These stateless functions can be deployed on the cloud computing platform without the need to manage the underlying infrastructure, which is abstracted away from the user and maintained by a third party. Serverless functions are decoupled from the rest of the application and can be run and deployed separately. Different functions can use different runtimes and programming languages. [6] [7] [8]
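The shape of such a stateless function can be sketched in a few lines of Java. In a real AWS Lambda deployment the class would implement the `RequestHandler` interface from the `aws-lambda-java-core` library; the class and method names below are illustrative, and the sketch is kept dependency-free so the idea stands on its own.

```java
import java.util.Map;

// Sketch of a stateless serverless function handler. In a real AWS Lambda
// deployment the class would implement RequestHandler<Map<String, String>, String>
// from aws-lambda-java-core; the shape of the handler method is the same.
public class GreetingHandler {

    // The handler receives an event (here a simple key/value map) and
    // returns a response. It holds no state between invocations, so the
    // cloud provider can create and destroy instances freely to scale.
    public String handleRequest(Map<String, String> event) {
        String name = event.getOrDefault("name", "world");
        return "Hello, " + name + "!";
    }
}
```

The provider invokes `handleRequest` once per event; because no state survives between calls, any instance can serve any request.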
3.1.1 Event driven architecture
The underlying architecture of serverless functions is called event driven architecture. Within this architecture every application can subscribe to events and react to things happening within the surrounding ecosystem. This separation allows serverless applications to be truly stateless and reactive.
The two main patterns of event driven architecture are called event sourcing and stream processing. Event sourcing is the model that stores all the state changes of the system; stream processing is the consumption of these changes and their use for further computation. These concepts are valid within both synchronous and asynchronous application structures. The event command is essentially the starting point of the event driven architecture. In serverless or microservice applications the dependencies and responsibilities are split up and distributed. Every single service needs to be able to handle specific event commands and have a fallback plan for what to do in case there is no response from the called service. This ensures that the entire system stays loosely coupled.
To ensure real-time services for customers, the applications within the event
driven architecture should be reactive and non-blocking. The applications are so
loosely coupled that they themselves might not even know that there are other
applications present. They are not taking orders; they are listening for events that
are meaningful for their application.
The mantra for microservices, "do one thing and do it well", also applies to serverless applications. The entire application, on a global scale, consumes an event stream. Shards of that event stream move through different services that are listening for events. Together with their own data, the services might publish events of their own. A service does not have to be concerned with the implementation of the consuming service. The big change compared to classic architectures like the monolith is that there is no central orchestration for the services. [10]
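The loose coupling described above can be illustrated with a minimal in-memory publish/subscribe sketch. The `EventBus` class and its event type names are hypothetical, not part of any AWS API; the point is only that the publisher never knows who, if anyone, is listening.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Minimal in-memory sketch of the publish/subscribe idea behind event
// driven architecture. Services subscribe to the event types they care
// about; the publisher does not know who is listening.
public class EventBus {
    private final Map<String, List<Consumer<String>>> subscribers = new HashMap<>();

    // A service registers interest in one event type.
    public void subscribe(String eventType, Consumer<String> handler) {
        subscribers.computeIfAbsent(eventType, k -> new ArrayList<>()).add(handler);
    }

    // Publishing delivers the payload to every subscriber of that type;
    // event types with no subscribers are simply ignored.
    public void publish(String eventType, String payload) {
        subscribers.getOrDefault(eventType, List.of())
                   .forEach(handler -> handler.accept(payload));
    }
}
```

A thumbnail service, for example, could subscribe to a hypothetical "image-uploaded" event without the uploading service ever knowing the thumbnail service exists.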
3.1.2 Advantages of serverless
Serverless applications are very streamlined in the development and in the hosting process. This enables faster execution times compared to applications hosted on a normal cloud computing instance. Most cloud providers support popular programming languages like Java, NodeJS, Python, Go, and even C#. [11]
The serverless pricing model is based on the execution time and the memory used rather than the mere existence of the application. This means the user pays exactly for what he uses. On most cloud computing platforms, the costs for serverless applications are not dependent on the total uptime of the service; they are calculated from the number of invocations and the memory used for processing. This can make serverless applications cheaper than a normal cloud computing instance. Serverless applications can also be used together with microservices as a mediator. [11] [8]
3.1.3 Disadvantages of serverless
By splitting the application into tinier and tinier pieces, the overall complexity of the application rises fast. All these pieces need to be maintained separately, which can make maintenance more difficult.
3.1.4 Cold Start
With AWS Lambda the initialization of a new container can take more than 5 seconds. After handling the first request, the cloud provider keeps the infrastructure alive and active for a short amount of time in case it is used again, which is also called keeping it warm. The advantage is that the infrastructure does not have to be initialized for every invocation. After a period of inactivity, the container is destroyed, and a new cold start has to be executed for the next initialization. This happens at about the 10-minute mark. The cold start exists because the cloud provider has to provision the runtime container and infrastructure first, before the actual code can be executed. [13]
It is important for cloud providers to find the right balance between deleting unused infrastructure and not having to initialize the infrastructure for every API call. There are a few things one can do to improve cold start times. The easiest way is to shed unnecessary dependencies from the serverless function, since they cause a larger overhead and slow down the function. Another way is to use a custom runtime environment like GraalVM. GraalVM precompiles the Java code to an executable binary with additional optimizations, work that the standard JVM would otherwise perform at run time. This procedure improves cold start time by a lot. [12] [14] [15]
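One practical consequence of container reuse is that setup placed in a static initializer runs only once per container (at the cold start), while the handler method runs on every invocation. The sketch below uses a simple counter standing in for expensive work such as loading a framework context or opening a database connection; the class name is illustrative.

```java
// Illustrates why expensive setup belongs in static initializers for a
// serverless function: the static block runs once per container (the
// cold start), while the handler method runs on every invocation of a
// warm container. The counter stands in for real work such as loading a
// framework context or opening a database connection.
public class WarmStartHandler {
    static int initializations = 0;

    // Runs once, when the runtime loads the class (the cold start).
    static {
        initializations++;
    }

    // Runs on every invocation; warm invocations skip the setup above.
    public String handleRequest(String input) {
        return input + " (container initialized " + initializations + " time(s))";
    }
}
```

However many times the handler is invoked within one container, the static setup cost is paid only on the first, cold invocation.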
3.2 Amazon Web Services - AWS
3.2.1 General
Amazon Web Services (AWS) is a cloud provider that offers cloud capabilities and resources to customers around the world.
3.2.2 AWS Lambda
Figure 3 shows that for AWS Lambda to run, the user just has to upload the source code and set the trigger for the function.
EC2 or Elastic Compute Cloud provides cloud computing instances which customers can use for hosting any type of software. It is the most popular service on AWS. Lambda was introduced to combat a very common issue with EC2: EC2 instances are very static and cannot react to changes in other parts of the system. A good example would be the upload of an image to S3, where the hosted application should react to the upload in a certain way.
In the AWS ecosystem, AWS Lambda works as the glue that sticks the services together and functions as a mediator between household AWS services like EC2, DynamoDB or Redshift.
With AWS Lambda the customer only pays for what he really uses. For standard cloud hosting like an AWS EC2 instance, the customer is billed for the uptime and for the memory usage. With AWS Lambda the customer is not billed for the total uptime, since for the function to count as up it has to be triggered. The customer only pays for the time the code needs to execute. The execution duration can vary depending on the amount of memory assigned to the function. [11]
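This billing model can be made concrete with a back-of-the-envelope calculation: Lambda compute is billed in GB-seconds (memory in GB multiplied by execution time in seconds). The per-GB-second rate below matches AWS's published Lambda price at the time of writing, but prices vary by region and change over time, so it should be treated as an assumption; the class name is illustrative.

```java
// Back-of-the-envelope Lambda cost estimate: compute is billed in
// GB-seconds (memory in GB multiplied by execution time in seconds).
// The rate below reflects AWS's published Lambda price at the time of
// writing; actual prices vary by region and change over time.
public class LambdaCostEstimate {
    static final double PRICE_PER_GB_SECOND = 0.0000166667;

    public static double estimate(double memoryGb, double durationSeconds, long invocations) {
        double gbSeconds = memoryGb * durationSeconds * invocations;
        return gbSeconds * PRICE_PER_GB_SECOND;
    }

    public static void main(String[] args) {
        // One million invocations of a 512 MB function running 200 ms each:
        // 0.5 GB * 0.2 s * 1,000,000 = 100,000 GB-seconds of compute
        // (the per-request fee is billed separately).
        System.out.printf("$%.2f%n", estimate(0.5, 0.2, 1_000_000));
    }
}
```

The same calculation also shows why memory assignment matters: doubling the memory doubles the GB-seconds for the same duration, unless the extra memory shortens the execution accordingly.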
3.2.3 Amazon S3
Amazon S3 or Simple Storage Service is an object-based storage solution for the AWS cloud. S3 stores files in folders, so-called buckets, in the AWS cloud. One of the main purposes of S3 is to offer cloud storage to cloud applications; within the AWS ecosystem it is also commonly used for backups. S3 also offers versioning, where a user can return to a previously deleted version of a file seamlessly.
S3 offers different cloud storage variations. They are differentiated by their retrieval time and by their reliability and availability. The variant with the longest retrieval time is called Glacier, which is perfect for archiving data that can tolerate an additional retrieval time and does not have to be available immediately.9
Another very useful feature are CloudWatch Alarms. CloudWatch Alarms are
predefined events that fire when a threshold is crossed. Such an alarm can
trigger a balancing mechanism or make SNS send an e-mail to a technician.
9 https://aws.amazon.com/de/s3/features/
10 https://aws.amazon.com/de/api-gateway/features/
11 https://aws.amazon.com/de/cloudwatch/features/
3.3 GraalVM
GraalVM is a high-performance virtual machine, developed by Oracle Corporation,
that promises to improve application performance and efficiency. GraalVM
removes the isolation between different programming languages and enables
interoperability in a shared runtime.
The JIT compiler compiles frequently executed code to native code. The JVM
decides which code will be compiled to native code based on profiling
information gathered during execution. It is more sophisticated than the
regular Java compiler and applies many optimizations to generate high-quality
machine code. The AOT compiler offers another way to improve the performance of
Java projects. Its main purpose is to improve the start-up time of the JVM: JIT
compilation optimizes code for peak performance, but at start-up nothing has
been JIT-compiled yet, so loading is slow. AOT compilation shortens this
warm-up period at the start of the execution. 12 13
Figure 5 shows that GraalVM consists of different layers. The Java HotSpot VM
is a high-performance VM that forms the lowest layer of the GraalVM stack. Used
alone, it only supports languages that target the JVM itself. The purpose of
the HotSpot VM is to detect frequently executed code, or "hot spots", within
the code, which is then picked for optimization. The JVM Compiler Interface
(JVMCI) allows implementing a custom optimizing just-in-time (JIT) compiler.
The heart of GraalVM is the JIT compiler called Graal. The Graal compiler is
mainly used for JIT compilation but can also be used for static AOT
compilation. The Truffle Language Implementation Framework is an open-source
framework for creating self-optimizing interpreters. Truffle uses an abstract
syntax tree (AST) to implement a language-specific interpreter that optimizes
itself during interpretation. [18]
12 https://www.baeldung.com/ahead-of-time-compilation
13 https://www.baeldung.com/graal-java-jit-compiler
GraalVM makes it possible to generate standalone executables, also referred to
as native images. Native images are ahead-of-time compiled Java code. They
contain the application classes, dependencies and runtime library classes from
the JDK. A native image does not run on the JVM. It includes all the necessary
components like memory management and thread scheduling through a different
virtual machine called Substrate VM. Substrate VM is the name for a collection
of runtime components, like the deoptimizer and the garbage collector. Creating
a native image achieves a faster start-up and a smaller footprint. The
generated native image will not run at peak performance, but a fast start-up
and a low footprint can make a significant difference in a serverless
environment. [18] 14 15
14 https://www.graalvm.org/getting-started/#native-images
15 https://www.graalvm.org/docs/reference-manual/native-image/
4. PRACTICAL SOLUTION
4.1 Scenario
In the current economy there is a shift towards microservice and serverless
applications, and many companies also focus on faster start-up times and leaner
development. As it stands, Java is still a powerhouse in the development of
business applications, and it will probably stay so for a long time.
One requirement for the cloud application is that it calls at least one other
cloud service within the cloud platform. This makes sure that the function is
fully operational and usable within a larger cloud application, since a
serverless function could otherwise be implemented as a mere hello-world
application. Another benefit is that invocations will take longer, which makes
it easier to get a good comparison between the different setups.
Serverless functions can have a wide variety of use cases within their respective
cloud landscape. Within a cloud application, serverless functions have the
purpose of being the glue that connects all the different cloud services together.
This makes it possible for the services to exchange data and create complex
systems. Serverless functions are therefore not designed to run very
resource-intensive, long-lasting tasks.
The cloud platform also has to have a good monitoring setup for all of the logs
within the cloud application. Since many metrics cannot be obtained through a
load testing framework, it is important to be able to read additional metrics
directly from the cloud platform itself.
We will experiment with different types of load on the serverless functions to
see how a function behaves and scales when required to. For this purpose, we
will be using a state-of-the-art load testing framework which helps us manage
our testing efforts. The resulting data of the load testing will give a better
picture of which serverless function handles load better. This can give an
indication of which serverless function will be cheaper in the long run.
Furthermore, the functions will be optimized by using not only the standard
Java Virtual Machine (JVM) but also an alternative virtual machine called
GraalVM.
4.3 Metrics
4.3.1 Cold start up
The first metric will be the cold start-up time of the function. Cold start-ups
happen when a new instance handles its first invocation. The application has to
be initialized, which can be a very time-consuming task. For serverless
functions, which are billed by execution time, a shorter cold start-up means a
cheaper invocation. A lower cold start-up time can often lead to a lower
execution time, depending on the operation.
4.4 Selection of the cloud provider
The cloud market is a highly contested market. Many companies try to throw
their hat into the ring and compete with the established market leaders. New
cloud services are being created constantly.
As shown in figure 7, the market leader in the fourth quarter of 2019 was AWS
with a 33% share of the cloud market, trailed by Microsoft Azure with an 18%
share. 16
16 https://www.statista.com/chart/18819/worldwide-market-share-of-leading-cloud-infrastructure-service-providers/
4.5 Evaluation of the Java frameworks
In this chapter we will discuss the criteria that we use to pick the right Java
frameworks for the performance testing. To start off we will introduce the three
key evaluation criteria. Afterwards we will introduce the selected Java
frameworks.
The second criterion that comes into the equation is the perceived usability
for the specific use case. The Java function must be very lightweight to
compete with a plain Java AWS Lambda. A plain Java AWS Lambda tends to be very
fast because it does not have to include a lot of dependencies. To get better
research data, the selected frameworks cannot have too many dependencies, which
would affect start-up times. They also have to be optimized for speed and low
memory usage. This eliminates frameworks that are very heavyweight. Many of
these frameworks come with a lot of out-of-the-box features and libraries.
While this makes them very practical in everyday use, it also makes them unable
to compete with small microframeworks in terms of speed and memory usage.
The third criterion is the public interest in the framework. For this criterion
we have to check how much the framework is used in total and how accepted the
framework is within the software developer community. This factor is very
relative and hard to measure, since there are no official statistics that
collect this data. JAXenter, a German Java site, made an effort to gather this
information: they surveyed their user base to find out the interest levels for
different Java application frameworks.
Figure 8: JAXenter 2020 Java framework survey 17
In figure 8 we can see that many JAXenter users are very interested in Spring
Boot and Spring. Quarkus and Micronaut also rank very high on the list.
Another good way to estimate how popular a framework is, is to check the GitHub
page of the project. GitHub stars tend to be a good estimator of public
interest and of how well a project is received within the community. Another
good estimator on GitHub are the open issues of the project. Every framework in
existence can have bugs, but frameworks with greater public interest and a more
motivated developer team tend to be better maintained. This ensures that open
issues are taken care of faster and that users do not have to wait for an
extended period for fixes in their applications.
With the evaluation process in place we decided on the best frameworks for the
practical evaluation. In the following chapters each of the frameworks will be
introduced.
4.5.2 Spring
Spring is a Java application framework that covers a variety of use cases.
Mainly it is used to create Java enterprise applications. It also supports
Groovy and Kotlin as alternative languages. Spring is open source and has a
large community behind it.
17 https://jaxenter.de/java/java-trends-frameworks-91786
Spring was created in the year 2003 as a response to the early JavaEE
specifications. The framework supports the use of dependency injection and
common annotations. 18
4.5.4 Micronaut
Micronaut is a JVM-based Java framework created by the developers of the Grails
framework. It provides an alternative to the Spring framework by offering
faster start-up times and a reduced memory footprint.
4.5.5 Quarkus
Quarkus is a Java framework that is very similar to Micronaut. It is also built
for delivering small, "subatomic" Java applications. It is licensed under the
Apache License and is completely open source. 21
18 https://docs.spring.io/spring/docs/current/spring-framework-reference/overview.html
19 https://spring.io/projects/spring-cloud-function
20 https://docs.micronaut.io/latest/guide/index.html
21 https://quarkus.io/get-started/
4.6 Performance testing
For the practical part we need to see how the Java applications behave under
different amounts of stress. This serves as a good decision basis for further
distinguishing between the different project setups.
For this purpose, we will create load testing scenarios. Load testing is a way
to test the underlying infrastructure and check whether the provisioned
resources are sufficient. It can also be used for performance testing, which
finds out how the system behaves at the peak of its user load. Load testing can
also help in finding the maximum operating capacity of an application and in
identifying bottlenecks within the application. [19]
There are a few very good load testing frameworks on the market, like JMeter,
Gatling and Artillery. For this thesis we will be using the Artillery load
testing framework.
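A minimal Artillery scenario file matching the first scenario of this thesis could look as follows; the target URL and the request path are placeholders, not the actual endpoint of this project:

```yaml
# scenario-1.yml (hypothetical file name)
config:
  target: "https://example.execute-api.eu-central-1.amazonaws.com"
  phases:
    - duration: 10     # run for 10 seconds
      arrivalRate: 40  # 40 new virtual users per second -> 400 requests in total
scenarios:
  - flow:
      - get:
          url: "/file"  # placeholder path of the file service
```

The second scenario would use duration: 50 and arrivalRate: 20 instead.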
Running a scenario file with the Artillery CLI executes the scenario and prints
the results in the command window. Artillery is also able to do more complex
load testing by including different user actions within a scenario. 22
22 https://artillery.io/docs/
4.7 Architecture
As described at the beginning of this chapter, the practical solution consists
of two components: the load testing framework on one hand and the cloud
application with all the different serverless functions on the other. For the
completion of the task we will be using AWS and cloud services within the AWS
ecosystem.
For testing purposes, it was required that the function within the cloud
application calls at least one cloud service. There are a lot of available
services within the AWS landscape. Many of those serve a specific purpose, like
AWS IoT for the collection of IoT data or Redshift for data warehousing. In our
example we don't need a complex service like Redshift, just a very basic and
minimalistic type of web service. Since setting up a complex service would
require a lot of time but would not add anything notable to the result, the
project uses S3 as the additional service.
The basic use case of the project is that of a simple file service. The file service
returns the content of a file that is stored on the platform. Within the AWS Lambda
function additional parsing is required to transform the content of the file to a
readable format. For this purpose a new S3 bucket was created.
At the forefront of the entire cloud application sits a single API Gateway. The
API Gateway manages the incoming traffic and serves as the main entry point to
the application. This main entry point will be called by the load testing
framework. There are also other use cases for an API Gateway, like throttling
and caching requests. Both can have a positive impact on the application and
improve its performance, but neither feature will be used in this cloud
application.
AWS Lambda will be used for the deployment of the serverless functions. The
main purpose of the AWS Lambdas is the connection to the S3 bucket. The AWS
Lambdas are the heart of the examination process: the overall performance of
the cloud application relies heavily on the performance of the AWS Lambda
functions. Since all the serverless functions are hosted with AWS Lambda, the
scaling process won't become an issue.
An S3 bucket will be used for the persistence of the cloud application. The S3
bucket will contain a dummy JSON file. The S3 storage class used is the
standard tier, since there are no advantages to switching to another type.
AWS CloudWatch captures all logs of the cloud services within a cloud
application and makes them accessible to authorized personnel. This helps in
calculating the metrics for the memory usage and the cold start time. AWS
CloudWatch gives the user a very versatile way of creating custom queries to
better understand the logs of a specific application.
{
  "name": "master-thesis-js",
  "description": "this is the dummy js for the master thesis"
}
To access the JSON file we have to create an AWS Lambda function which
accesses the data.
4.9 Setting up the AWS Lambdas
In the following chapters we will see how the Lambda functions were set up and
created. We will go into the commonalities of all the AWS Lambda setups.
The Java projects within the AWS Lambdas cannot connect to the bucket out of
the box. For this purpose, every Java project also includes the AWS Java SDK.
They also have to include the specific sub library for the cloud service that
they want to use. The SDK is available for both Gradle and Maven projects.
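For a Gradle project, including the S3 sub library of the AWS SDK for Java might look like the following snippet; the version number is only a placeholder and should be replaced with a current release:

```groovy
dependencies {
    // AWS SDK for Java v1, S3 sub library only (not the full SDK)
    implementation 'com.amazonaws:aws-java-sdk-s3:1.11.800'
}
```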
AmazonS3 s3 = AmazonS3ClientBuilder.standard()
        .withRegion(Regions.EU_CENTRAL_1)
        .build();
This code creates a client for connecting to the S3 bucket. The region is set
to the Frankfurt region, also called eu-central-1.
The client then connects to the S3 bucket and reads the contents of the
master-thesis.json. The response is an S3Object.
To use the content of the object we use a helper function called getJsonBody.
This method returns the content of the object as a String.
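The helper itself is not reproduced in the thesis. A minimal sketch of what such a method might look like is shown below; only the method name getJsonBody comes from the text, the rest is an assumed reconstruction that reads an input stream (in the real function, the stream returned by s3Object.getObjectContent()) into a String.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;
import java.util.stream.Collectors;

public class S3Helper {

    // Reads the whole content stream of an object into a single String.
    static String getJsonBody(InputStream content) {
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(content, StandardCharsets.UTF_8))) {
            return reader.lines().collect(Collectors.joining("\n"));
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```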
The contents are then parsed into a string and sent back in an API Gateway
friendly format. Some frameworks like Micronaut and Quarkus handle the response
format on their own, and it is enough to return the body. But with the plain
AWS Lambda it was necessary to convert the response to a specific format.
{
  statusCode: "...",
  headers: {
    custom-header: "..."
  },
  body: "...",
  isBase64Encoded: true|false
}
The API Gateway mandates that the response JSON follows a specific format: the
response has to contain a field called statusCode holding a valid HTTP status
code, and it should contain a body field holding a string. For binary support,
it should also contain a flag called isBase64Encoded. It is also possible to
include a list of custom, API-specific headers.
return ApiGatewayResponse.builder()
        .setStatusCode(200)
        .setObjectBody(responseBody)
        .setHeaders(Collections.singletonMap("X-Function", functionName))
        .build();
To uphold the API Gateway response format, a custom POJO that encapsulates the
response format was created. If the format is not upheld, the API Gateway
blocks the response and the response body from leaving the cloud application;
the response body arriving at the frontend will then be empty. 23
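The custom POJO is not printed in the thesis; the following is a hypothetical reconstruction that supports exactly the builder calls shown above. In the real class, setObjectBody would typically serialize the given object to JSON; here it simply stores a string for brevity.

```java
import java.util.Collections;
import java.util.Map;

public class ApiGatewayResponse {

    private final int statusCode;
    private final String body;
    private final Map<String, String> headers;
    private final boolean isBase64Encoded;

    private ApiGatewayResponse(int statusCode, String body,
                               Map<String, String> headers, boolean isBase64Encoded) {
        this.statusCode = statusCode;
        this.body = body;
        this.headers = headers;
        this.isBase64Encoded = isBase64Encoded;
    }

    public int getStatusCode() { return statusCode; }
    public String getBody() { return body; }
    public Map<String, String> getHeaders() { return headers; }
    public boolean getIsBase64Encoded() { return isBase64Encoded; }

    public static Builder builder() { return new Builder(); }

    public static class Builder {
        private int statusCode = 200;
        private String body = "";
        private Map<String, String> headers = Collections.emptyMap();
        private boolean isBase64Encoded = false;

        public Builder setStatusCode(int statusCode) { this.statusCode = statusCode; return this; }
        public Builder setObjectBody(String body) { this.body = body; return this; }
        public Builder setHeaders(Map<String, String> headers) { this.headers = headers; return this; }
        public Builder setBase64Encoded(boolean base64) { this.isBase64Encoded = base64; return this; }

        public ApiGatewayResponse build() {
            return new ApiGatewayResponse(statusCode, body, headers, isBase64Encoded);
        }
    }
}
```

The defaults keep the response valid for the API Gateway even when a field is not set explicitly.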
Table 1 shows the versions used for the serverless functions. At the time of
writing, the newest versions of the frameworks were used for the development
process. Since then, Micronaut has released its new 2.0.0 version and Quarkus
its 1.3.7 version. All the framework-based functions were created with the
online project builder tool found on each framework's website. For Micronaut,
an upgrade to 2.0.0 seemed unreasonable at the time, since popularity and usage
were key factors for the framework selection, and a brand-new major release of
a framework does not yet have high usage.
23 https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-integration-settings-integration-response.html
4.9.3 Deployment
There are multiple ways to deploy an AWS Lambda function onto the system of the
cloud provider. Some of them have drawbacks, but in general all of them offer
the same functionality.
If you are using a framework for the creation of the Lambda function, it is
very likely that the framework comes with its own command line interface (CLI).
Many of these framework-specific CLIs also have an option to deploy the
serverless function smoothly. This should be the preferred way when using
frameworks, since external dependencies are not managed by the build tool and
hassling with dependencies can become a tedious task.
The SAM framework gives a lot of benefits to the developer on the AWS platform,
but it is still an AWS product. To unify all cloud providers into one
framework, the Serverless Framework was created. It gives the developer similar
options as the SAM CLI, but instead of using the SAM template language it uses
its own templating language. 25
The last option, which is the most straightforward, is uploading the JAR file
to the AWS Lambda directly. In AWS, that can be done on the Lambda details
page.
24 https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/what-is-sam.html
25 https://www.serverless.com/framework/docs/
Name        Deployment
aws-spring  Zip
Table 2 shows the deployment technologies used for each serverless function. A
wide variety of deployment options were used in this project. The deployment
type does not affect the results of the performance tests.
The scenarios cannot be too similar, since the data would then be very similar
between all the serverless setups.
Name        Total requests  Duration (s)  Requests per second
Scenario 1  400             10            40
Scenario 2  1000            50            20
Table 3 shows the performance testing scenarios that will be used later on. It
also shows that the two scenarios work in very different ways. The first
scenario is a classic load test which creates a lot of load in a very short
time: it runs for 10 seconds, and every second there are 40 requests to the
backend. The goal of this scenario is to put a large amount of stress on the
system and see how the system deals with it. The second scenario takes a more
ramped-up approach. Its load is distributed over a longer time frame, which
helps when thinking about scaling and gives functions that would otherwise
skyrocket in the first scenario more time.
4.10.1 Setting up the load testing
As previously described, we will use the Artillery framework for the actual
load testing. Artillery uses YAML files to define testing scenarios. These
scenarios can be executed with the built-in CLI.
With this we can create and execute the listed scenarios. The Artillery
framework returns the results of the tests in the console window. This only
gives us the execution times of the application. For the memory usage and the
cold start metric it was necessary to go a different route and get the data
directly from AWS with CloudWatch.
To be able to get the cold start times the following query was used.
To get the memory usage of the serverless functions the following query was
used.
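The exact queries are not reproduced here. As an illustrative reconstruction (not necessarily the queries used in this thesis), CloudWatch Logs Insights can extract both metrics from the REPORT log lines that Lambda writes for every invocation; the @initDuration field is only present on cold starts:

```
# Cold starts: only REPORT lines of cold-started invocations carry @initDuration
filter @type = "REPORT" and ispresent(@initDuration)
| stats count(@initDuration) as coldStarts,
        avg(@initDuration)   as avgColdStartMs,
        max(@initDuration)   as maxColdStartMs

# Memory usage: @maxMemoryUsed is reported in bytes
filter @type = "REPORT"
| stats avg(@maxMemoryUsed / 1000000) as avgMemoryMb,
        max(@maxMemoryUsed / 1000000) as maxMemoryMb,
        min(@maxMemoryUsed / 1000000) as minMemoryMb
```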
This query returns the following values from the CloudWatch logs.
Name    Average cold start duration (ms)    Cold starts    Max cold start duration (ms)
Table 4 shows that the first scenario put significant load on the backend and
forced AWS to scale out massively. Every invocation had to create an extra
instance; none of the serverless functions was able to complete the task and
free up its capacity for a second invocation. This means that the load was
overall too much for the system to handle and could have been very close to or
above the peak load that the system could handle. Out of all the setups the
plain setup performed the best and had the lowest average initialization
duration with 548 ms. The Spring Cloud serverless function performed the worst
with an average initialization duration of 3797 ms. Table 5 shows that the
plain function also performed best in the second scenario, with a similar
average initialization duration of 548 ms. In the second scenario we can see
that the system was not pushed to its limit and could reuse computing
capacities for the invocations. With that in mind, the number of cold starts
can influence the overall performance of the serverless function. In that
regard the plain function also had the lowest number of cold starts, while the
Spring Cloud function had the highest number of cold starts.
Name    Average memory (MB)    Max memory (MB)    Unused memory (MB)    Default memory (MB)    Min memory (MB)
Table 6 shows that the overall memory usage is very similar between the
serverless functions. The plain function performs marginally better than the
rest with an average memory usage of 166 MB. The serverless function with the
most unused memory was the Quarkus function with an unused capacity of 309 MB.
In this scenario every function started 400 new instances for 400 invocations,
which can result in very similar numbers; this can be attributed to the fact
that the started instances weren't reused for other invocations. The results
for the second scenario look very similar to the first scenario. Table 7 shows
that the plain function again performs best compared to the other functions,
with an average memory usage of 166 MB. The "lowest memory" number of the plain
function indicates that it profited the most from the longer timeframe of the
scenario. This can be attributed to its lowest cold start-up time and lowest
number of cold starts. Because of this, the serverless function can potentially
be kept warm for an invocation earlier, and additional memory does not have to
be used for that invocation. The memory usage overall was a little higher in
the second scenario than in the first. This can be because the serverless
functions were kept warm for a longer period of time.
Name    Average execution time (ms)    Min execution time (ms)    Max execution time (ms)
As shown in table 8, the stress on the system made the execution time
skyrocket. Every function had an average execution time above 10 seconds. The
best performer was the Quarkus function with an average execution time of
10467 ms. This makes it clear why the system scaled up so massively in the
first scenario and started 400 instances for 400 requests: the duration of the
first scenario was 10 seconds, which is lower than the minimum execution time
of all the functions. Table 9 shows that the serverless functions performed
better overall in the second scenario. This time the load could be scaled and
distributed much more easily. Of the functions, the Micronaut function
outperformed the rest of the pack with an average execution time of 594 ms.
4.11 Optimization with GraalVM
The serverless functions as they stand leave some room for further improvement.
To further optimize the usage of the Java language for serverless functions, we
can use another Java virtual machine called GraalVM.
Table 10 shows the versions for the Micronaut serverless function using
GraalVM. For the new Lambda function, Micronaut 2.0.0 will be used, because
Micronaut 2.0.0 comes with a lot of improvements for GraalVM. The updates
include improvements in the deployment process and especially in the creation
of the native image. To use the new function as an AWS Lambda function, it has
to be deployed to a custom runtime.
With Micronaut you can go either way. It is easiest to follow the
documentation, which creates the GraalVM image with a Dockerfile. First you
have to create a native-image.properties file, which contains the specific
options that you can set for the native image creation.
Option      Explanation
-J-Xmx2g    Raises the ceiling for the RAM usage. The creation of a native image is a very resource-intensive task.
Table 11 shows all the options that were used in the creation of the native
image, which leaves us with the following native image configuration.
Args = -H:Name=aws-micronaut-app \
-H:Class=io.micronaut.function.aws.runtime.MicronautLambdaRuntime \
-J-Xmx2g \
--no-fallback \
-H:+ReportExceptionStackTraces \
--report-unsupported-elements-at-runtime \
-H:DynamicProxyConfigurationFiles=dynamic-proxies.json
4.11.2 Deployment of native image
The next step after the creation of the native image is to deploy it to AWS.
AWS Lambdas can be implemented in any programming language. To be able to use a
new programming language, a runtime has to be included in the deployment
package. This comes in the form of an executable file named bootstrap. The
custom runtime can be a shell script, a script in a language that can run on
Amazon Linux, or a binary executable file. The bootstrap file serves as the
entry point for the custom runtime. The bootstrap file for the Micronaut
application looked as follows.
#!/bin/sh
set -euo pipefail
./aws-micronaut-app -Xmx128m -Djava.library.path=$(pwd)
This simple bootstrap file executes the binary executable of the Micronaut
Lambda function. To deploy the native image, the bootstrap file and the binary
executable have to be added to a zip file. The zip file can then be uploaded on
the AWS Lambda detail page.
Name    Average cold start duration (ms)    Cold starts    Max cold start duration (ms)
Name    Average cold start duration (ms)    Cold starts    Max cold start duration (ms)
Table 12 shows that GraalVM's promises of improved cold start-up times were not
a hoax. The number of cold starts dropped to 64 for the high-load scenario.
GraalVM could handle the load better than all the JVM-based functions and is
even slightly faster than the plain Java function, with an average cold start
time of 437 ms. As shown in table 13, the results look very similar to the
first scenario. In terms of cold start-up time, GraalVM is marginally better
than the plain function. For the same duration GraalVM only has 28 cold starts,
while the other two contestants had over 200. This is a clear indication that
GraalVM will also perform very well in the comparison of the memory usage and
execution time.
Name    Average memory (MB)    Max memory (MB)    Unused memory (MB)    Default memory (MB)    Min memory (MB)
As shown in table 14, the GraalVM and Micronaut combination had the lowest
average memory usage with 128 MB. All of the functions struggled with the load
of the first scenario; here GraalVM could only slightly outperform the plain
function. According to table 15, the results for the memory usage of the
GraalVM-based function and the JVM-based functions look very similar. The
deciding factor this time is the memory usage of the faster requests: the
lowest memory used for an invocation drops to 25 MB for the GraalVM function.
The lower number of cold starts in the second scenario affects this metric
tremendously, because fewer cold starts mean more instances were reused for
handling the load.
Name    Average execution time (ms)    Min execution time (ms)    Max execution time (ms)
Table 16 shows that the trend of improved memory usage and improved cold start
times gives GraalVM an enormous boost in execution time. For the load of
scenario 1, GraalVM has an average execution time of 1190 ms, which shows that
the function was fast enough to reuse instances that were being kept warm over
the duration of the test. As shown in table 17, the GraalVM function also had a
significantly lower average execution time in the second scenario with 277 ms.
5. CONCLUSION
This thesis aimed to find the most optimal solution for hosting a Java
serverless function. For this purpose, many Java frameworks have been compared
and distilled down to a few that looked promising. Specific frameworks are
created to fulfil a specific purpose or to offer a generalized solution to a
problem; some frameworks simply were not suitable and would not have been able
to compete with the slimmer microframeworks.
After the selection process, the serverless functions were load tested to see
how the setups compare with each other. Based on the data from the load tests,
it can be concluded that the market is still highly contested and very open.
The Java frameworks had very similar results in memory usage and execution
time; the differences between the frameworks started with the comparison of the
cold start-up times.
The comparison of cold start-up times can give a very good indication of which
serverless function scales better when faced with an even higher load of
traffic. In this department, Micronaut has been the clear winner among the
frameworks. The plain serverless function outcompeted all the frameworks, which
can be attributed to the fact that it is just half the size and has fewer
dependencies than the other serverless functions. What came as a big surprise
is that the plain function is not as far ahead of its competition as I
previously expected. In terms of execution time, Micronaut even outperformed
the plain function. This clearly shows that the microframeworks are not that
far behind in terms of execution time and memory usage.
Bibliography
[19] "What is Load testing in software testing? Examples, How To Do, Importance, Differences," [Online]. Available: http://tryqa.com/what-is-load-testing-in-software/.
List of figures
Figure 1: AWS shared responsibility model [4] 8
Figure 2: cloud service models 11
Figure 3: AWS Lambda [11] 19
Figure 4: GraalVM architecture [17] 21
Figure 5: GraalVM structure [18] 22
Figure 6: concept cloud application 24
Figure 7: cloud market share 27
Figure 8: JAXenter 2020 Java framework survey 29
Figure 9: Cloud architecture practical solution 32
List of tables
Table 1: List of serverless functions ................................................................................35
Table 2: Deployment types ............................................................................................. 37
Table 3: performance testing scenarios ..........................................................................37
Table 4: Results cold start scenario 1 .............................................................................39
Table 5: Results cold start scenario 2 .............................................................................39
Table 6: Results memory usage scenario 1 ....................................................................40
Table 7: Results memory usage scenario 2 ....................................................................41
Table 8: Result execution time scenario 1 .......................................................................42
Table 9: Results execution time scenario 2 .....................................................................42
Table 10: AWS GraalVM Lambda setup .........................................................................43
Table 11: Native image options ....................................................................................... 44
Table 12: Result cold start GraalVM scenario 1 .............................................................. 45
Table 13: Result cold start GraalVM scenario 2 .............................................................. 46
Table 14: Result memory usage GraalVM scenario 1 ..................................................... 46
Table 15: Result memory usage GraalVM scenario 2 ..................................................... 47
Table 16: Result execution time GraalVM scenario 1 ...................................................... 47
Table 17: Result execution time GraalVM scenario 2 ...................................................... 48