
Gatling Reports


Contents

Injection
Injection profiles, differences between open and closed workload models
Open Model
Closed Model
Meta DSL
Concurrent Scenarios
Sequential Scenarios
HTTP Engine
maxConnectionsPerHost
shareConnections
enableHttp2
virtualHost
Requests
Groups
REPORTS

Injection

Injection profiles, differences between open and closed workload models.

Open vs Closed Workload Models

• Closed systems, where you control the concurrent number of users:
  • a call center where all operators are busy
  • ticketing websites where users get placed into a queue when the system is at full capacity
• Open systems, where you control the arrival rate of users:
  • users keep on arriving even though the application has trouble serving them; most websites behave this way

Open Model

setUp(
  scn.injectOpen(
    nothingFor(4), // 1
    atOnceUsers(10), // 2
    rampUsers(10).during(5), // 3
    constantUsersPerSec(20).during(15), // 4
    constantUsersPerSec(20).during(15).randomized(), // 5
    rampUsersPerSec(10).to(20).during(10), // 6
    rampUsersPerSec(10).to(20).during(10).randomized(), // 7
    stressPeakUsers(1000).during(20) // 8
  ).protocols(httpProtocol)
);

The building blocks for open model profile injection are:

1. nothingFor(duration): Pauses for a given duration.
2. atOnceUsers(nbUsers): Injects a given number of users at once.
3. rampUsers(nbUsers).during(duration): Injects a given number of users distributed evenly over a time window of a given duration.
4. constantUsersPerSec(rate).during(duration): Injects users at a constant rate, defined in users per second, during a given duration. Users will be injected at regular intervals.
5. constantUsersPerSec(rate).during(duration).randomized(): Injects users at a constant rate, defined in users per second, during a given duration. Users will be injected at randomized intervals.
6. rampUsersPerSec(rate1).to(rate2).during(duration): Injects users from a starting rate to a target rate, defined in users per second, during a given duration. Users will be injected at regular intervals.
7. rampUsersPerSec(rate1).to(rate2).during(duration).randomized(): Injects users from a starting rate to a target rate, defined in users per second, during a given duration. Users will be injected at randomized intervals.
8. stressPeakUsers(nbUsers).during(duration): Injects a given number of users following a smooth approximation of the Heaviside step function stretched to a given duration.
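For instance, a common profile combines a ramp-up with a plateau. A minimal sketch, assuming scn and httpProtocol are defined as above; the rates and durations are illustrative:

setUp(
  scn.injectOpen(
    rampUsersPerSec(1).to(20).during(60), // ramp the arrival rate from 1 to 20 users/s over 60 s
    constantUsersPerSec(20).during(300)   // hold 20 users/s for 5 minutes
  ).protocols(httpProtocol)
);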

Closed Model

setUp(
  scn.injectClosed(
    constantConcurrentUsers(10).during(10), // 1
    rampConcurrentUsers(10).to(20).during(10) // 2
  )
);

The building blocks for closed model profile injection are:

1. constantConcurrentUsers(nbUsers).during(duration): Injects so that the number of concurrent users in the system stays constant.
2. rampConcurrentUsers(fromNbUsers).to(toNbUsers).during(duration): Injects so that the number of concurrent users in the system ramps linearly from one number to another.
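For instance, to ramp concurrency up and then hold it steady, a minimal sketch, again assuming scn and httpProtocol are defined as above; numbers are illustrative:

setUp(
  scn.injectClosed(
    rampConcurrentUsers(0).to(50).during(60), // ramp from 0 to 50 concurrent users over 60 s
    constantConcurrentUsers(50).during(300)   // hold 50 concurrent users for 5 minutes
  ).protocols(httpProtocol)
);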

Meta DSL

It is possible to use elements of the Meta DSL to write tests more easily. If you want to chain levels and ramps to reach the limit of your application (a test sometimes called capacity load testing), you can do it manually using the regular DSL and looping with map and flatMap, but there is now an alternative using the Meta DSL.

setUp(
  // generate an open workload injection profile
  // with levels of 10, 15, 20, 25 and 30 arriving users per second
  // each level lasting 10 seconds
  // separated by linear ramps lasting 10 seconds
  scn.injectOpen(
    incrementUsersPerSec(5.0)
      .times(5)
      .eachLevelLasting(10)
      .separatedByRampsLasting(10)
      .startingFrom(10) // Double
  )
);
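A matching builder exists for closed workloads; a minimal sketch, assuming the same scn:

setUp(
  // generate a closed workload injection profile
  // with levels of 10, 15, 20, 25 and 30 concurrent users
  // each level lasting 10 seconds
  // separated by linear ramps lasting 10 seconds
  scn.injectClosed(
    incrementConcurrentUsers(5)
      .times(5)
      .eachLevelLasting(10)
      .separatedByRampsLasting(10)
      .startingFrom(10)
  )
);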

Concurrent Scenarios

Multiple scenarios can be injected in the same setUp; they all start at the same time:

setUp(
  scenario1.injectOpen(injectionProfile1),
  scenario2.injectOpen(injectionProfile2)
);
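As a concrete sketch, with placeholder scenario names and an httpProtocol as defined earlier, both scenarios below start injecting at the same instant:

setUp(
  scenario1.injectOpen(rampUsers(50).during(30)),
  scenario2.injectOpen(constantUsersPerSec(5).during(60))
).protocols(httpProtocol);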

Sequential Scenarios

setUp(
  parent.injectClosed(injectionProfile)
    // child1 and child2 will start at the same time, when the last parent user terminates
    .andThen(
      child1.injectClosed(injectionProfile)
        // grandChild will start when the last child1 user terminates
        .andThen(grandChild.injectClosed(injectionProfile)),
      child2.injectClosed(injectionProfile)
    ).andThen(
      // child3 will start when the last grandChild and child2 users terminate
      child3.injectClosed(injectionProfile)
    )
);

HTTP Engine

maxConnectionsPerHost

In order to mimic a real web browser, Gatling can open multiple concurrent connections per virtual user when fetching resources on the same host over HTTP/1.1. By default, Gatling caps the number of concurrent connections per remote host per virtual user at 6, which matches the behavior of modern browsers. You can change this number with maxConnectionsPerHost:

http.maxConnectionsPerHost(10);

shareConnections

The default behavior is that every virtual user has its own connection pool and its own SSLContext. This meets your needs when you want to simulate internet traffic, where each virtual user represents a web browser.

If instead you want to simulate server-to-server traffic, where the actual client has a long-lived connection pool, you want the virtual users to share a single global connection pool.

http.shareConnections();

enableHttp2

HTTP/2 experimental support can be enabled with the .enableHttp2 option.

Note that you'll either need your injectors to run with Java 9+, or make sure that gatling.http.ahc.useOpenSsl isn't set to false in the Gatling configuration.

http.enableHttp2();

virtualHost

You can override the computed virtual host:

// with a static value
http.virtualHost("virtualHost");

// with a Gatling EL string
http.virtualHost("#{virtualHost}");

// with a function
http.virtualHost(session -> session.getString("virtualHost"));
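Putting the options from this section together, a minimal sketch of a protocol configuration; the base URL is hypothetical:

import static io.gatling.javaapi.http.HttpDsl.http;
import io.gatling.javaapi.http.HttpProtocolBuilder;

HttpProtocolBuilder httpProtocol = http
    .baseUrl("https://example.com") // hypothetical base URL
    .maxConnectionsPerHost(10)      // cap per-virtual-user HTTP/1.1 connections
    .virtualHost("#{virtualHost}"); // virtual host from a Gatling EL string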

Requests

Response Time

The response time is the elapsed time between:

• the instant Gatling tries to send a request, which accounts for:
  • DNS resolution time (might be bypassed if it's already cached). Note that the DNS resolution time metric is available in Gatling Enterprise.
  • TCP connect time (might be bypassed if a keep-alive connection is available in the connection pool). Note that the TCP connect time metric is available in Gatling Enterprise.
  • TLS handshake time (might be bypassed if a keep-alive connection is available in the connection pool). Note that the TLS handshake time metric is available in Gatling Enterprise.
  • HTTP round trip
• the instant Gatling receives a complete response or experiences an error (timeout, connection error, etc.)
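For illustration, with purely hypothetical numbers: a first request might spend 15 ms on DNS resolution, 20 ms on TCP connect, 45 ms on the TLS handshake and 120 ms on the HTTP round trip, for a response time of 200 ms; a subsequent request reusing a keep-alive connection skips the first three phases and reports roughly 120 ms.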

Groups
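Groups are defined in the scenario by wrapping a chain of requests under a common name. A minimal sketch using the Java DSL; the scenario, group and request names are hypothetical:

import static io.gatling.javaapi.core.CoreDsl.*;
import static io.gatling.javaapi.http.HttpDsl.*;
import io.gatling.javaapi.core.ScenarioBuilder;

// Wrap two requests (and the pause between them) in a group named "checkout"
ScenarioBuilder scn = scenario("GroupedScenario")
    .group("checkout").on(
        exec(http("cart").get("/cart"))
            .pause(1)
            .exec(http("pay").get("/pay"))
    );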

Count
The counts are the number of group executions, not the sum of the counts of each individual request in
that group.

Response Time
The response time of a group is the cumulated response time of the individual requests in that group.

Duration

Group duration is the elapsed time between the instant a virtual user enters a group and the instant it exits.

• Group duration is reported in the "Duration" charts.

Cumulated Response Time


Group cumulated response time is the time spent in a group while requests are in flight: the requests' response times plus the start-to-end durations of their resources. In short, it's the group duration minus the pauses.

• Group cumulated response time is reported in the "Cumulated Response Time" charts.
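For illustration, with hypothetical numbers: a group containing two requests with response times of 100 ms and 150 ms, separated by a 500 ms pause, has a cumulated response time of about 250 ms but a group duration of about 750 ms.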

REPORTS
Response time ranges

This chart shows how response times are distributed among standard ranges. The right panel shows the number of OK/KO requests.

Summary

The top panel shows standard statistics such as min, max, average, standard deviation and percentiles, globally and per request.

Active users over time

This chart displays the active users during the simulation: total and per scenario. "Active users" is neither "concurrent users" nor "users arrival rate". It's a kind of mixed metric that serves both open and closed workload models and represents "users who were active on the system under load at a given second". It is computed as:

(number of alive users at the previous second)
+ (number of users that were started during this second)
- (number of users that were terminated during the previous second)

Response time distribution

This chart displays the distribution of the response times.

Requests per second over time

This chart displays the number of requests sent per second over time.

Responses per second over time

This chart displays the number of responses received per second over time: total, successes and failures.

Response Time against Global RPS

This chart shows how the response time for the given request is distributed, depending on the overall number of requests at the same time.
