This is a presentation I gave at the Kraków Java User Group on test automation and how to solve the challenges around it so that it becomes really useful for development teams. It contains some examples of how we are doing it in Akamai's Web department, and others based on my own experience.
Creating testing tools to support development
1. Creating Testing Tools to Support Development
Chema del Barco
SDET Manager, Akamai Technologies
2. Warming up
How many of you…
… Work as dedicated testers?
… Create automated tests?
… Think you have enough tests?
… Rely on automated tests from testers?
… Trust these automated tests?
3. Warming up
How do we use test automation at Akamai?
• Different products & systems
• Java, JS, Ruby, Python…
• A lot of backend, less frontend
• Part of workflow
• We also do manual testing
• Lots of challenges
6. Not testing the right thing
Real-Life Example: Volvo’s Automatic Collision Avoidance (2010)
7. Not testing the right thing
https://www.youtube.com/watch?v=aNi17YLnZpg
8. Not testing the right thing - lessons learned
1. Everybody makes mistakes
2. There is no such thing as 100% test coverage
3. There are always risks involved in any delivery
4. Testing can tell us where these risks are
5. We can mitigate risks if we know them
17. The Test Environment Hell
• Complex applications tend to rely on big integrated environments for validating changes
• These environments are so complex that they require constant troubleshooting and maintenance
• Testers (and builds) are often blocked, and teams spend a lot of time investigating whether a problem is app- or environment-related
18. [Pipeline diagram: a Dev stream (Build → UT) pushing to a DEV environment, and an SDET stream (Build → IT → ST → Validate) promoting and pushing to an INT environment, ending in Delivery.]
Legend: UT – Unit Testing; CT – Component Testing; IT – Integration Testing; ST – System Testing; SIT – System Integration Testing; Promote – deploy to the next test environment; Push – manual code push.
Ex: Akamai’s Luna Control Center test environments
19. [Same pipeline diagram as slide 18, repeated: Akamai’s Luna Control Center test environments.]
21. 1. Test the right thing in the right environment
“Separation of Concerns”:
1. Test that your change works, in isolation (UT)
2. Test that your app works with your change, in isolation (CT / IT)
3. Test that your app works with your change, in integration (ST)
4. Test that your system works in integration (SIT)
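As an illustration of level 1, here is a minimal TestNG unit test that isolates a change from its collaborators with Mockito. This is only a sketch: PriceCalculator and TaxService are invented names, not anything from Akamai's codebase.

```java
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;
import static org.testng.Assert.assertEquals;

import org.testng.annotations.Test;

public class PriceCalculatorTest {

    // Hypothetical collaborator; mocked so the test exercises only our change.
    interface TaxService {
        double rateFor(String country);
    }

    // Hypothetical unit under test.
    static class PriceCalculator {
        private final TaxService taxService;

        PriceCalculator(TaxService taxService) {
            this.taxService = taxService;
        }

        double gross(double net, String country) {
            return net * (1 + taxService.rateFor(country));
        }
    }

    @Test
    public void grossPriceAppliesCountryTaxRate() {
        TaxService tax = mock(TaxService.class);
        when(tax.rateFor("PL")).thenReturn(0.23); // no real tax service involved

        PriceCalculator calc = new PriceCalculator(tax);

        // Fails only if our change is wrong, not because an environment is down.
        assertEquals(calc.gross(100.0, "PL"), 123.0, 0.0001);
    }
}
```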
22. 1. Test the right thing in the right environment
• Isolated Component (Service) Testing
• Exclusive to the team
• MockServer to mock external dependencies (mocks provided by the service owners)
• The SUT thinks it’s in production
• Can also test Integration (Contract)
• Very stable, maintained by devs/testers
• Testers can automate and validate faster
• Everyone is less frustrated
[Diagram: a local test environment in which the Service Under Test (SUT) talks to Services 2, 3 and 4, each replaced by a MockServer instance.]
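The slide refers to MockServer (mock-server.com). As a sketch of how one of those mocked dependencies could be stood up, with the port, path and payload invented purely for illustration:

```java
import static org.mockserver.model.HttpRequest.request;
import static org.mockserver.model.HttpResponse.response;

import org.mockserver.integration.ClientAndServer;

public class ComponentTestEnvironment {
    public static void main(String[] args) {
        // Stand-in for "Service 2" from the diagram; the port is arbitrary.
        ClientAndServer service2 = ClientAndServer.startClientAndServer(8082);

        // The expectation should mirror the API contract published by the
        // service owners (this path and body are invented for the sketch).
        service2
            .when(request()
                .withMethod("GET")
                .withPath("/api/v1/users/42"))
            .respond(response()
                .withStatusCode(200)
                .withHeader("Content-Type", "application/json")
                .withBody("{\"id\": 42, \"name\": \"Jane\"}"));

        // Point the SUT's configuration at http://localhost:8082 and it
        // "thinks" it is talking to the real Service 2.
        // ... run the component/contract tests against the SUT here ...

        service2.stop();
    }
}
```

Because the expectations come from the service owners, the same stubs double as lightweight contract tests against the published API.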
23. 1. Test the right thing in the right environment
[Pipeline diagram, extending slide 18: the Dev stream is Build → UT → CT (DEV environments), and the SDET stream is Build → IT → ST → SIT → Validate, promoted through INT and SYS environments and pushed to a QA environment before Delivery.]
If a test fails here, we know for a fact that it will be because of:
1. The environment,
2. A service not following its API contract, or
3. An actual integration bug
(+ IT + ST)
24. 2. Implement automatic retry-on-error techniques
Some test frameworks, like TestNG, allow several ways to automatically retry failed tests:
1. Run “<test-output>/testng-failed.xml” after a run with failed tests [Good]
2. Run TestNG programmatically and implement an annotation transformer that attaches a retry analyzer to every @Test annotation [Great!]
3. If you don’t want to retry everything, create a @Retry annotation and have the transformer attach the retry analyzer only to tests that carry it [Great!]
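A minimal sketch of option 2; the retry budget of two is arbitrary, and you still need to register the transformer, e.g. as a <listener> in testng.xml:

```java
import java.lang.reflect.Constructor;
import java.lang.reflect.Method;

import org.testng.IAnnotationTransformer;
import org.testng.IRetryAnalyzer;
import org.testng.ITestResult;
import org.testng.annotations.ITestAnnotation;

public class RetryTransformer implements IAnnotationTransformer {

    // Re-runs a failed test until the retry budget is exhausted.
    public static class RetryAnalyzer implements IRetryAnalyzer {
        private static final int MAX_RETRIES = 2;
        private int attempts = 0;

        @Override
        public boolean retry(ITestResult result) {
            return attempts++ < MAX_RETRIES;
        }
    }

    // Attaches the analyzer to every @Test found at runtime.
    @Override
    public void transform(ITestAnnotation annotation, Class testClass,
                          Constructor testConstructor, Method testMethod) {
        annotation.setRetryAnalyzer(RetryAnalyzer.class);
    }
}
```

For option 3, the transformer would call setRetryAnalyzer only when testMethod.isAnnotationPresent(Retry.class), for a custom, runtime-retained @Retry annotation you define yourself.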
28. 3. Use plugins in your CI to detect flaky tests
https://wiki.jenkins-ci.org/display/JENKINS/Test+Results+Analyzer+Plugin
29. Challenges of test automation
3. Doing test automation for the wrong reasons
30. 1. “I don’t have to test anything manually”
Manual testing is CRUCIAL in Agile because machines cannot think outside the box… yet ;)
“Agile Testing: A Practical Guide for Testers and Agile Teams”, Lisa Crispin & Janet Gregory
31. 2. Testing as a separate workflow
[Diagram: Dev stream (Build → UT, DEV environments) and Test stream (IT/ST, QA environment) as disconnected flows. Development “STARTS”, is declared “DONE” and marked “Ready to Test”; the tester waits for the changes to be deployed, runs the tests, and only then provides FEEDBACK (OK or FAIL).]
32. Not understanding WHY we need test automation
[Pipeline diagram from slide 23, annotated with “Build Pipeline” and “Test Stream”.]
The goal of test automation is to provide FEEDBACK
Test Automation should be another form of delivery
33. Not understanding WHY we need test automation
• Feedback MUST be:
• Reliable (no false positives/negatives)
• Fast (as early as possible)
• Scalable (keeps being fast when growing)
• Runnable by anyone
• It’s a WHOLE TEAM thing
35. Building vs. Using
• Now that testers are toolsmiths, they can also be infected by the “building-everything-from-scratch” disease
• There is a tool for pretty much anything. If there is not, search again (at the very least you should find a starting point)
• A good tester should always try to find the right tool for the right kind of testing
36. Example
● In 2014 we had to send 10k+ HTML emails to our users
● HTML emails are rendered inconsistently by different email clients:
• Some do not support HTML at all
• Some do not render it consistently with the W3C specifications
● You never know which email client your users are using
• Desktop clients: Outlook 2002, 2013, Thunderbird
• Web clients: GMail (in Chrome, FF, IE), Yahoo! Mail, etc.
● You cannot easily automate checking whether an email looks good, but you can automate rendering it in ~10 mail clients
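One way to automate that rendering step (preview services such as Litmus or Email on Acid work this way) is to send the candidate email to a seed address and let the service screenshot it in each client. A minimal JavaMail sketch; the SMTP host and seed address are purely illustrative:

```java
import java.util.Properties;
import javax.mail.Message;
import javax.mail.Session;
import javax.mail.Transport;
import javax.mail.internet.InternetAddress;
import javax.mail.internet.MimeMessage;

public class EmailRenderPreview {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("mail.smtp.host", "smtp.example.com"); // illustrative relay

        Session session = Session.getInstance(props);
        MimeMessage msg = new MimeMessage(session);
        msg.setFrom(new InternetAddress("noreply@example.com"));
        // Hypothetical seed address issued by the rendering service:
        msg.setRecipient(Message.RecipientType.TO,
                new InternetAddress("preview-run@seed.example.com"));
        msg.setSubject("Campaign preview");
        msg.setContent("<html><body><h1>Hello!</h1></body></html>",
                "text/html; charset=utf-8");

        // The service renders the message in the target clients and reports back.
        Transport.send(msg);
    }
}
```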
47. Example: the pain of a real delivery pipeline
Days Left | Pass % | Notes
1 | 5% | Everything is broken! Signing in to the service is broken... Almost all tests sign in a user, so almost all tests failed.
0 | 4% | A partner team we rely on deployed a bad build to their testing environment yesterday.
-1 | 54% | A dev broke the save scenario yesterday (or the day before?). Half the tests save a document at some point in time. Devs spent most of the day determining if it's a frontend bug or a backend bug.
-2 | 54% | It's a frontend bug; devs spent half of today figuring out where.
-3 | 54% | A bad fix was checked in yesterday. The mistake was pretty easy to spot, though, and a correct fix was checked in today.
http://googletesting.blogspot.com/2015/04/just-say-no-to-more-end-to-end-tests.html
48. Example: the pain of a real delivery pipeline
Days Left | Pass % | Notes
-4 | 1% | Hardware failures occurred in the lab for our testing environment.
-5 | 84% | Many small bugs hiding behind the big bugs (e.g., sign-in broken, save broken). Still working on the small bugs.
-6 | 87% | We should be above 90%, but are not for some reason.
-7 | 89.54% | (Rounds up to 90%, close enough.) No fixes were checked in yesterday, so the tests must have been flaky yesterday.
http://googletesting.blogspot.com/2015/04/just-say-no-to-more-end-to-end-tests.html