Gatling Reports
Injection
    Injection profiles, differences between open and closed workload models
    Open Model
    Closed Model
    Meta DSL
    Concurrent Scenarios
    Sequential Scenarios
HTTP Engine
    maxConnectionsPerHost
    shareConnections
    enableHttp2
    virtualHost
REPORTS
Injection
Open Model
setUp(
  scn.injectOpen(
    nothingFor(4), // 1
    atOnceUsers(10), // 2
    rampUsers(10).during(5), // 3
    constantUsersPerSec(20).during(15), // 4
    constantUsersPerSec(20).during(15).randomized(), // 5
    rampUsersPerSec(10).to(20).during(10), // 6
    rampUsersPerSec(10).to(20).during(10).randomized(), // 7
    stressPeakUsers(1000).during(20) // 8
  ).protocols(httpProtocol)
);
1. nothingFor(duration): Pauses for a given duration.
2. atOnceUsers(nbUsers): Injects a given number of users at once.
3. rampUsers(nbUsers).during(duration): Injects a given number of users distributed evenly over a time window of the given duration.
4. constantUsersPerSec(rate).during(duration): Injects users at a constant rate, defined in users per second, during a given duration. Users are injected at regular intervals.
5. constantUsersPerSec(rate).during(duration).randomized(): Injects users at a constant rate, defined in users per second, during a given duration. Users are injected at randomized intervals.
6. rampUsersPerSec(rate1).to(rate2).during(duration): Injects users from a starting rate to a target rate, defined in users per second, during a given duration. Users are injected at regular intervals.
7. rampUsersPerSec(rate1).to(rate2).during(duration).randomized(): Injects users from a starting rate to a target rate, defined in users per second, during a given duration. Users are injected at randomized intervals.
8. stressPeakUsers(nbUsers).during(duration): Injects a given number of users following a smooth approximation of the Heaviside step function stretched to a given duration.
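As a back-of-the-envelope sanity check on the profile above, the total number of injected users can be tallied step by step (a ramp between two rates contributes its average rate times its duration). This is a sketch with an invented helper class, not a Gatling API:

```java
// Hypothetical helper tallying the users injected by the open-model profile
// shown above; each figure follows directly from the step's definition.
public class OpenModelMath {
    public static int totalUsers() {
        int atOnce = 10;                      // atOnceUsers(10)
        int ramp = 10;                        // rampUsers(10).during(5)
        int constant = 20 * 15;               // constantUsersPerSec(20).during(15)
        int constantRandomized = 20 * 15;     // same count, randomized intervals
        int rampRate = (10 + 20) / 2 * 10;    // rampUsersPerSec(10).to(20).during(10)
        int rampRateRandomized = (10 + 20) / 2 * 10;
        int stressPeak = 1000;                // stressPeakUsers(1000).during(20)
        // nothingFor(4) injects no users at all
        return atOnce + ramp + constant + constantRandomized
                + rampRate + rampRateRandomized + stressPeak;
    }

    public static void main(String[] args) {
        System.out.println("Total users injected: " + totalUsers());
    }
}
```

For this profile the steps sum to 1920 users over the whole run.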
Closed Model
setUp(
  scn.injectClosed(
    constantConcurrentUsers(10).during(10), // 1
    rampConcurrentUsers(10).to(20).during(10) // 2
  )
);
1. constantConcurrentUsers(nbUsers).during(duration): Maintains a constant number of concurrent users for a given duration.
2. rampConcurrentUsers(fromNbUsers).to(toNbUsers).during(duration): Linearly ramps the number of concurrent users from one value to another over a given duration.
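The difference between the two workload models can be illustrated with a toy discrete-time simulation (not Gatling code; all class and method names are invented): an open model injects new users at a fixed arrival rate regardless of how many are still in flight, so concurrency is an emergent value, while a closed model pins concurrency to a configured cap.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Toy simulation contrasting open and closed workload models.
public class WorkloadModels {
    static final int SERVICE_TICKS = 5; // each virtual user stays active 5 ticks

    // Open model: peak concurrency that emerges when `rate` users arrive per
    // tick for `ticks` ticks; nothing caps how many are in flight at once.
    public static int openModelPeakConcurrency(int rate, int ticks) {
        Deque<Integer> finishTimes = new ArrayDeque<>();
        int peak = 0;
        for (int t = 0; t < ticks; t++) {
            // retire users whose service time has elapsed
            while (!finishTimes.isEmpty() && finishTimes.peekFirst() <= t) {
                finishTimes.pollFirst();
            }
            // inject at the configured arrival rate, unconditionally
            for (int i = 0; i < rate; i++) finishTimes.addLast(t + SERVICE_TICKS);
            peak = Math.max(peak, finishTimes.size());
        }
        return peak;
    }

    // Closed model: concurrency never exceeds the configured cap, by construction.
    public static int closedModelPeakConcurrency(int concurrency) {
        return concurrency;
    }

    public static void main(String[] args) {
        // Open-model concurrency is emergent: arrival rate x service time (2 x 5 here).
        System.out.println("open peak:   " + openModelPeakConcurrency(2, 20));
        // Closed-model concurrency is a setting, not an outcome.
        System.out.println("closed peak: " + closedModelPeakConcurrency(10));
    }
}
```

The practical consequence: if the system under test slows down (service time grows), an open model keeps piling on users, while a closed model simply sees its throughput drop.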
Meta DSL
It is possible to use elements of the Meta DSL to write tests more easily. If you want to chain levels and ramps to reach the limit of your application (a test sometimes called capacity load testing), you can do it manually using the regular DSL and looping with map and flatMap, but there is now an alternative using the Meta DSL.
setUp(
  // with levels of 10, 15, 20, 25 and 30 arriving users per second
  scn.injectOpen(
    incrementUsersPerSec(5.0)
      .times(5)
      .eachLevelLasting(10)
      .separatedByRampsLasting(10)
      .startingFrom(10) // Double
  )
);
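The total load generated by such a stair-step profile is easy to tally: each flat level contributes rate x duration users, and each connecting ramp contributes the average of the two adjacent rates times the ramp duration. A sketch with an invented helper (not a Gatling API):

```java
// Back-of-the-envelope tally for the incrementUsersPerSec profile above
// (levels 10, 15, 20, 25, 30 users/sec, 10 s each, joined by 10 s ramps).
public class MetaDslMath {
    public static double totalUsers(double start, double increment, int levels,
                                    double levelSeconds, double rampSeconds) {
        double total = 0;
        for (int i = 0; i < levels; i++) {
            double rate = start + i * increment;
            total += rate * levelSeconds;                  // flat level
            if (i + 1 < levels) {                          // ramp up to next level
                double next = rate + increment;
                total += (rate + next) / 2 * rampSeconds;  // area under the ramp
            }
        }
        return total;
    }

    public static void main(String[] args) {
        // levels: 100+150+200+250+300 = 1000; ramps: 125+175+225+275 = 800
        System.out.println(totalUsers(10, 5, 5, 10, 10));
    }
}
```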
Concurrent Scenarios
setUp(
  scenario1.injectOpen(injectionProfile1),
  scenario2.injectOpen(injectionProfile2)
);
Sequential Scenarios
setUp(
  parent.injectClosed(injectionProfile)
    // child1 and child2 will start at the same time, when the last parent user terminates
    .andThen(
      child1.injectClosed(injectionProfile)
        .andThen(grandChild.injectClosed(injectionProfile)),
      child2.injectClosed(injectionProfile)
    ).andThen(
      // child3 will start when the last grandChild and child2 users terminate
      child3.injectClosed(injectionProfile)
    )
);
HTTP Engine
maxConnectionsPerHost
In order to mimic a real web browser, Gatling can open multiple concurrent connections per virtual user when fetching resources from the same host over HTTP/1.1. By default, Gatling caps the number of concurrent connections per remote host per virtual user at 6, which matches the behavior of modern browsers. You can change this number with maxConnectionsPerHost(max: Int).
http.maxConnectionsPerHost(10);
shareConnections
The default behavior is that every virtual user has its own connection pool and its own SSLContext. This meets your needs when you want to simulate internet traffic where each virtual user simulates a web browser.
Instead, if you want to simulate server-to-server traffic where the actual client has a long-lived connection pool, you will want the virtual users to share a single global connection pool.
http.shareConnections();
enableHttp2
Note that you'll either need your injectors to run with Java 9+, or make sure that
gatling.http.ahc.useOpenSsl hasn't been set to false in your Gatling configuration.
http.enableHttp2();
virtualHost
// with a static value
http.virtualHost("virtualHost");
// with a Gatling EL string
http.virtualHost("#{virtualHost}");
// with a function
http.virtualHost(session -> "virtualHost");
Requests
Response Time
The response time is the elapsed time between the instant a request is sent and the instant Gatling receives a complete response or experiences an error (timeout, connection error, etc.). It includes:
- DNS resolution time (might be bypassed if it's already cached). Note that the DNS resolution time metric is available in Gatling Enterprise.
- TCP connect time (might be bypassed if a keep-alive connection is available in the connection pool). Note that the TCP connect time metric is available in Gatling Enterprise.
- TLS handshake time (might be bypassed if a keep-alive connection is available in the connection pool). Note that the TLS handshake time metric is available in Gatling Enterprise.
- HTTP round trip
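The composition above can be made concrete with invented figures (the class, method names, and all numbers below are illustrative, not a Gatling API): a request on a fresh connection pays every component, while a request reusing a pooled keep-alive connection pays only the HTTP round trip.

```java
// Toy breakdown of the response-time components listed above.
public class ResponseTimeMath {
    // Fresh connection: DNS + TCP + TLS + HTTP round trip.
    public static int freshConnectionMs(int dns, int tcp, int tls, int httpRoundTrip) {
        return dns + tcp + tls + httpRoundTrip;
    }

    // Pooled keep-alive connection: DNS/TCP/TLS are bypassed.
    public static int pooledConnectionMs(int httpRoundTrip) {
        return httpRoundTrip;
    }

    public static void main(String[] args) {
        System.out.println("fresh:  " + freshConnectionMs(12, 8, 30, 95) + " ms");
        System.out.println("pooled: " + pooledConnectionMs(95) + " ms");
    }
}
```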
Groups
Count
The counts are the number of group executions, not the sum of the counts of each individual request in
that group.
Response Time
The response time of a group is the cumulated response times of each individual request in that group.
Duration
Group duration is the elapsed time between the instant a virtual user enters a group and the instant it
exits.
Group cumulated response time is reported in the “Cumulated Response Time” charts.
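The distinction between a group's cumulated response time and its duration can be illustrated with invented numbers (the class and figures below are illustrative, not Gatling code): three requests of 120, 80 and 100 ms separated by two 500 ms pauses.

```java
// Toy arithmetic for the two group metrics described above.
public class GroupMetrics {
    // Cumulated response time: sum of the individual response times only.
    public static int cumulatedResponseTimeMs(int[] responseTimesMs) {
        int sum = 0;
        for (int rt : responseTimesMs) sum += rt;
        return sum;
    }

    // Duration: response times plus everything in between (here, pauses).
    public static int durationMs(int[] responseTimesMs, int[] pausesMs) {
        int total = cumulatedResponseTimeMs(responseTimesMs);
        for (int p : pausesMs) total += p;
        return total;
    }

    public static void main(String[] args) {
        int[] rts = {120, 80, 100};
        int[] pauses = {500, 500};
        System.out.println("cumulated: " + cumulatedResponseTimeMs(rts) + " ms");
        System.out.println("duration:  " + durationMs(rts, pauses) + " ms");
    }
}
```

Here the cumulated response time is 300 ms, while the group duration is 1300 ms, which is why the two appear on separate charts.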
REPORTS
Graph Explanation
- Response time ranges
- Summary
- Active users over time
- Response Time against Global RPS