Creating Testing Tools to Support Development
Chema del Barco
SDET Manager, Akamai Technologies
Warming up
How many of you…
… Work as dedicated testers?
… Create automated tests?
… Think you have enough tests?
… Rely on automated tests from testers?
… Trust these automated tests?
Warming up
How do we use test automation at Akamai?
• Different products & systems
• Java, JS, Ruby, Python…
• A lot of backend, less frontend
• Part of workflow
• We also do manual testing
• Lots of challenges
Challenges of Test Automation
1. Not testing the right thing
Not testing the right thing
Real Life Example:
Volvo’s Automatic Collision Avoidance
(2010)
Not testing the right thing
https://www.youtube.com/watch?v=aNi17YLnZpg
Not testing the right thing - lessons learned
1. Everybody makes mistakes
2. There is no such thing as 100% test coverage
3. There are always risks involved in any delivery
4. Testing can tell us where these risks are
5. We can mitigate risks if we know them
Not testing the right thing
Some failed demos later (2014)…
Not testing the right thing
https://www.youtube.com/watch?v=kWiwS-43xpk
Not testing the right thing
How can we mitigate this?
Not testing the right thing
Don’t lose the user’s point of view
PO/Testers can use
BDD / Spec by Example
Not testing the right thing
Developers
can use TDD
Not testing the right thing
Or you can all work
together to use ATDD!
Not testing the right thing
Alternative approach:
Use Domain Abstractions
(ex: Actors & Actions)
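The Actors & Actions idea from the slide above can be sketched in a few lines of Java. This is an illustration of the abstraction, not Akamai's actual framework — the `Actor`, `Action`, `logIn`, and `openDashboard` names are invented for the example:

```java
import java.util.ArrayList;
import java.util.List;

// A minimal Actor/Action domain abstraction: tests read as user behaviour,
// while technical details (HTTP calls, UI clicks) live inside Actions.
public class ActorActionsSketch {

    /** An Action is anything an actor can perform against the system. */
    interface Action {
        String perform();
    }

    /** The Actor records what it has done, so tests stay readable. */
    static class Actor {
        private final String name;
        private final List<String> log = new ArrayList<>();

        Actor(String name) { this.name = name; }

        Actor attemptsTo(Action action) {
            log.add(action.perform());
            return this; // fluent chaining: actor.attemptsTo(a).attemptsTo(b)
        }

        List<String> activityLog() { return log; }
    }

    // Example actions — purely illustrative stand-ins for real API/UI steps.
    static Action logIn(String user) { return () -> user + " logged in"; }
    static Action openDashboard()    { return () -> "opened dashboard"; }

    public static void main(String[] args) {
        Actor alice = new Actor("alice");
        alice.attemptsTo(logIn("alice")).attemptsTo(openDashboard());
        System.out.println(alice.activityLog());
    }
}
```

Because the test script only speaks in actors and actions, it keeps the user's point of view even when the implementation underneath changes.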
Challenges of test automation
2. The Test Environment Hell
The Test Environment Hell
• Complex applications tend to rely on big Integrated
Environments for validating changes
• These environments are so complex that they require
constant troubleshooting and maintenance
• Testers (and builds) are often blocked, and teams spend
a lot of time investigating whether a problem is app- or
environment-related
[Pipeline diagram — Ex: Akamai's Luna Control Center test environments: a Dev stream (Build → UT, Push to DEV Environment) and an SDET stream (Build → Validate → IT → ST, Push/Promote through the INT Environment to Delivery).]
Legend: UT – Unit Testing; CT – Component Testing; IT – Integration Testing; ST – System Testing; SIT – System Integration Testing; Promote – deploy to next test environment; Push – manual code push.
The Test Environment Hell
How can we mitigate this?
1. Test the right thing in the right environment
“Separation of Concerns”:
1. Test that your change works, in isolation (UT)
2. Test that your app works with your change, in
isolation (CT / IT)
3. Test that your app works with your change, in
integration (ST)
4. Test that your system works in integration (SIT)
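As a minimal sketch of level 1 — testing your change in isolation (UT) — the snippet below stubs the only collaborator so nothing but the new logic runs. The `PriceCalculator` and `TaxService` names are invented for the illustration:

```java
public class IsolationSketch {

    /** External dependency we do NOT want to exercise in a unit test. */
    interface TaxService {
        double rateFor(String country);
    }

    /** The unit under test: our change lives here. */
    static class PriceCalculator {
        private final TaxService taxes;
        PriceCalculator(TaxService taxes) { this.taxes = taxes; }

        double grossPrice(double net, String country) {
            return net * (1 + taxes.rateFor(country));
        }
    }

    public static void main(String[] args) {
        // Level 1 (UT): stub the dependency so the test isolates our change.
        TaxService fixedRate = country -> 0.21;
        PriceCalculator calc = new PriceCalculator(fixedRate);
        System.out.println(calc.grossPrice(100.0, "ES"));
    }
}
```

The real tax service only enters the picture at levels 2–4, where the app and the system are tested with their actual collaborators.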
1. Test the right thing in the right environment
• Isolated Component (Service) Testing
• Exclusive for Team
• MockServer to mock external
dependencies (provided by the
service owners)
• SUT thinks it’s in production
• Can also test Integration (Contract)
• Very stable, maintained by
devs/testers
• Testers can automate and validate
faster
• Everyone is less frustrated
[Diagram: a Local Test Environment with the Service Under Test (SUT) connected to Services 2, 3, and 4, each replaced by a MockServer.]
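MockServer itself is a separate library; as a self-contained illustration of the same idea, the sketch below uses the JDK's built-in `HttpServer` to fake a dependency so the service under test believes it is talking to the real service. The `/status` path and the canned payload are made up for the example:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Scanner;

public class MockDependencySketch {

    /** Start a fake dependency that always answers with a canned payload. */
    static HttpServer startMock(String cannedBody) throws IOException {
        // Port 0 = pick any free port, so parallel test runs never collide.
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/status", exchange -> {
            byte[] body = cannedBody.getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        return server;
    }

    /** Stand-in for the service under test calling its dependency. */
    static String callDependency(int port) throws IOException {
        URL url = new URL("http://localhost:" + port + "/status");
        try (Scanner s = new Scanner(url.openStream(), "UTF-8")) {
            return s.useDelimiter("\\A").next();
        }
    }

    public static void main(String[] args) throws IOException {
        HttpServer mock = startMock("{\"status\":\"ok\"}");
        try {
            // The SUT gets a stable, canned answer — no shared environment needed.
            System.out.println(callDependency(mock.getAddress().getPort()));
        } finally {
            mock.stop(0);
        }
    }
}
```

In practice the service owners would provide the mock (and its contract), so the canned responses stay honest about what the real service returns.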
[Pipeline diagram: Dev stream (Build → UT → CT, DEV Environments) and SDET stream (Build → Validate → IT → ST → SIT across the INT, QA, and SYS Environments via Push/Promote), ending in Delivery.]
1. Test the right thing in the right environment
If a test fails here, we know for a fact that it is because of:
1. The environment
2. A service not following its API
contract, or
3. An actual integration bug
(+ IT + ST)
2. Implement automatic retry-on-error techniques
Some test frameworks, like TestNG, allow several ways to
automatically retry failed tests:
1. Run “<test-outputs>testng-failed.xml” after a run with failed
tests [Good]
2. Run TestNG programmatically and implement a “retry test”
transformer for the @Test annotation [Great!]
3. If you don’t want to retry everything, create a @Retry annotation
that implements the “retry test” listener [Great!]
2. Implement automatic retry-on-error techniques (option 2)
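The original slides showed the retry code as screenshots. The logic that TestNG's retry analyzer implements can be sketched in plain Java — this is a generic stand-in for illustration, not actual TestNG code:

```java
public class RetrySketch {

    /** Mirrors the shape of TestNG's IRetryAnalyzer: retry while budget remains. */
    static class RetryAnalyzer {
        private final int maxRetries;
        private int attempts = 0;

        RetryAnalyzer(int maxRetries) { this.maxRetries = maxRetries; }

        boolean retry() { return attempts++ < maxRetries; }
    }

    /** Run a test, retrying on failure until the analyzer says stop. */
    static boolean runWithRetry(Runnable test, RetryAnalyzer analyzer) {
        while (true) {
            try {
                test.run();
                return true; // passed
            } catch (AssertionError | RuntimeException e) {
                if (!analyzer.retry()) {
                    return false; // retries exhausted, report the failure
                }
            }
        }
    }

    public static void main(String[] args) {
        // A "flaky test" that fails twice, then passes.
        int[] calls = {0};
        Runnable flaky = () -> {
            if (++calls[0] < 3) throw new AssertionError("flaky failure");
        };
        System.out.println(runWithRetry(flaky, new RetryAnalyzer(3))); // true
    }
}
```

In real TestNG you would attach such an analyzer to tests via the @Test annotation (globally through an annotation transformer, or selectively through your own @Retry annotation, as the options above describe).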
3. Use plugins in your CI to detect flaky tests
https://wiki.jenkins-ci.org/display/JENKINS/Test+Results+Analyzer+Plugin
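What such plugins surface can be approximated with a simple heuristic: a test that both passed and failed across a window of runs is a flakiness suspect. The sketch below is a simplified stand-in for that idea, not the plugin's actual code:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class FlakyDetectorSketch {

    /**
     * Flag tests that both passed and failed within the same window of runs
     * (true = pass, false = fail) — the pattern CI result analyzers let you
     * eyeball across builds.
     */
    static Map<String, Boolean> flakiness(Map<String, List<Boolean>> history) {
        Map<String, Boolean> result = new LinkedHashMap<>();
        for (Map.Entry<String, List<Boolean>> e : history.entrySet()) {
            List<Boolean> runs = e.getValue();
            result.put(e.getKey(), runs.contains(true) && runs.contains(false));
        }
        return result;
    }

    public static void main(String[] args) {
        Map<String, List<Boolean>> history = new LinkedHashMap<>();
        history.put("loginTest", List.of(true, false, true)); // flaky suspect
        history.put("saveTest",  List.of(true, true, true));  // stable
        System.out.println(flakiness(history));
    }
}
```

A real analyzer would also account for code changes between runs — a test that failed before a fix and passed after it is not flaky.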
Challenges of test automation
3. Doing test automation for the
wrong reasons
1. “I don’t have to test anything manually”
Manual Testing is CRUCIAL in Agile because machines cannot think outside the box… yet ;)
“Agile Testing: A Practical Guide for Testers and Agile Teams”, Lisa Crispin & Janet Gregory
2. Testing as a separate workflow
[Diagram: a Dev stream (Build → UT, DEV Environments) and a separate Test stream (Build → IT/ST, QA Environment). Development is declared “DONE” / “Ready to Test”; the tester waits for changes to be deployed and runs the tests, so FEEDBACK (OK/FAIL) arrives long after development starts.]
Not understanding WHY we need test automation
[Diagram: the full delivery pipeline (Build → UT → CT → IT → ST → SIT → Delivery across the DEV, INT, QA, and SYS Environments), with the Build Pipeline and the Test Stream drawn as one flow.]
The goal of test automation is to provide FEEDBACK
Test Automation should be another form of delivery
Not understanding WHY we need test automation
• Feedback MUST be:
• Reliable (no false positives/negatives)
• Fast (as early as possible)
• Scalable (keeps being fast when growing)
• Runnable by anyone
• It’s a WHOLE TEAM thing
Challenges of test automation
4. Building Vs. Using
Building vs. Using
• Now that testers are toolsmiths, they can also be
infected by the “building-everything-from-scratch”
disease
• There is a tool for pretty much anything. If there is not,
search again (at least you should find a starting point)
• A good tester should always try to find the right tool
for the right kind of testing
● In 2014 we had to send 10k+ HTML emails to our users
● HTML emails are inconsistently rendered by different email clients:
• Some do not support HTML at all,
• Some do not render it consistently with W3C specifications
● You never know which email client your users are using
• Desktop clients: Outlook 2002, 2013, Thunderbird
• Web clients: GMail (in Chrome, FF, IE), Yahoo! Mail, etc
● You cannot easily automate checking whether an email looks good, but
you can automate rendering it in ~10 mail clients
Example
Example (cont)
Building Test Frameworks
How can a Test Framework be USEFUL?
1. Easy to Write Tests
when().
    get("/store").
then().
    body("store.book.findAll { it.price < 10 }.title",
        hasItems("Sayings of the Century", "Moby Dick"));

when().
    get("/store").
then().
    body("store.book.author.collect { it.length() }.sum()",
        greaterThan(50));
library: RestAssured
https://github.com/rest-assured/rest-assured
2. Reporting
library: cucumber-reporting
https://github.com/damianszczepanik/cucumber-reporting
2. Reporting
library: logging-selenium
http://loggingselenium.sourceforge.net/usage.html
3. Debugging
Library: curl-logger
(Maciej Gawinecki)
http://nomoretesting.logdown.com/
https://github.com/dzieciou/curl-logger
curl 'http://google.com/' -H 'Accept: */*' -H 'Content-Length: 0' -H 'Host: google.com' -H 'Connection: Keep-Alive' -H 'User-Agent: Apache-HttpClient/4.5.1 (Java/1.8.0_45)' --compressed --insecure --verbose
4. Transparent X-Platform & X-Browser Testing
Libraries:
WebDriver
http://www.seleniumhq.org
appium
https://github.com/appium/appium
5. Project Management Tool Integration
Zephyr API (ZAPI)
http://docs.getzephyr.apiary.io
Challenges of test automation
5. Thinking that end-to-end test
automation solves everything
Typical scenario of relying on end-to-end test automation
Example: the pain of a real delivery pipeline
Days Left | Pass % | Notes
1  | 5%  | Everything is broken! Signing in to the service is broken... Almost all tests sign in a user, so almost all tests failed.
0  | 4%  | A partner team we rely on deployed a bad build to their testing environment yesterday.
-1 | 54% | A dev broke the save scenario yesterday (or the day before?). Half the tests save a document at some point. Devs spent most of the day determining whether it's a frontend or a backend bug.
-2 | 54% | It's a frontend bug; devs spent half of today figuring out where.
-3 | 54% | A bad fix was checked in yesterday. The mistake was pretty easy to spot, though, and a correct fix was checked in today.
http://googletesting.blogspot.com/2015/04/just-say-no-to-more-end-to-end-tests.html
Example: the pain of a real delivery pipeline
Days Left | Pass % | Notes
-4 | 1%  | Hardware failures occurred in the lab for our testing environment.
-5 | 84% | Many small bugs were hiding behind the big bugs (e.g., sign-in broken, save broken). Still working on the small bugs.
-6 | 87% | We should be above 90%, but are not for some reason.
-7 | 89.54% | (Rounds up to 90%, close enough.) No fixes were checked in yesterday, so the tests must have been flaky.
http://googletesting.blogspot.com/2015/04/just-say-no-to-more-end-to-end-tests.html
©2016 AKAMAI | FASTER FORWARD™
Problems with end-to-end tests
● Long
• Developers wait a long time for feedback on their changes
● Flaky
• Sensitive to environment and subsystem failures, timeouts, etc.
• Reduce developers' trust in tests; as a result, flaky tests are often ignored
● Hard to isolate the root cause
• Developers need to find the specific lines of code causing the bug
• For >1M LOC it's like trying to find a needle in a haystack.
http://googletesting.blogspot.com/2015/04/just-say-no-to-more-end-to-end-tests.html
How to address it? Do end-to-end only if necessary!
[Test pyramid diagram: moving toward end-to-end tests means more maintenance, slower tests, and more flakiness.]
“Move Fast & Don't Break Things”, GTAC 2014
Cooling down
Summary
Summary
• Test Automation is a form of development and
should be treated as such
• It also suffers from the same problems
• Think of WHY you need it before doing it
• There are plenty of tools and libraries to make it
more useful
Thank You!
Feel free to send questions to jdelbarc@akamai.com