Writing and maintaining a suite of acceptance tests that can give you a high level of confidence in the behaviour and configuration of your system is a complex task. In this session, Dave will describe approaches to acceptance testing that allow teams to:
• work quickly and effectively
• build excellent functional coverage for complex enterprise-scale systems
• manage and maintain those tests in the face of change, and of evolution in both the codebase and the understanding of the business problem.
This workshop will answer the following questions, and more:
How do you fail fast?
How do you make your testing scalable?
How do you isolate test cases from one another?
How do you maintain a working body of tests when you radically change the interface to your system?
More details:
https://confengine.com/agile-india-2019/proposal/8539/acceptance-testing-for-continuous-delivery
Conference link: https://2019.agileindia.org
Acceptance Testing for Continuous Delivery by Dave Farley at #AgileIndia2019
2.–5. (C)opyright Dave Farley 2017
The Role of Acceptance Testing
[Diagram sequence: a Deployment Pipeline. A Local Dev. Env. commits to the Source Repository, which feeds the Commit stage; build artifacts go to an Artifact Repository and are promoted through Acceptance, Component Performance, System Performance, Manual Test and Staging environments (each with its own Deployment + App.), and finally to the Production Env. Acceptance testing sits at the heart of this flow.]
6. What is Acceptance Testing?
Asserts that the code does what the users want.
8. What is Acceptance Testing?
Asserts that the code works in a “production-like” test environment.
9. What is Acceptance Testing?
A test of the deployment and configuration of a whole system.
10. What is Acceptance Testing?
Provides timely feedback on stories - closes a feedback loop.
11. What is Acceptance Testing?
Acceptance Testing, ATDD, BDD, Specification by Example, Executable Specifications.
12. (C)opyright Dave Farley 2017
What is Acceptance Testing?
A Good Acceptance Test is:
An Executable Specification of the Behaviour of the System
13.–15. (C)opyright Dave Farley 2017
What is Acceptance Testing?
[Diagram: Idea → Executable spec. → Code → Unit Test → Build → Release; the executable specification drives the development cycle.]
16. (C)opyright Dave Farley 2017
The Problem:
There is often a disconnect between what users want of a system and what the software delivers.
18. (C)opyright Dave Farley 2017
The Problem:
Identifying user-need and designing a solution are two VERY difficult problems
Let’s Try to NOT solve both at the same time!
23. (C)opyright Dave Farley 2017
The Problem:
In Most Orgs, Functional Testing is Manual
Slow, Low-Quality, Expensive, Unreliable, Error Prone, Fragile, Results Hard to Understand, …
24. (C)opyright Dave Farley 2017
The Problem:
Automated Functional Tests Often Tightly-Coupled To SUT
26. (C)opyright Dave Farley 2017
The Problem:
Automated Functional Tests Often Tightly-Coupled To SUT
Slow to Develop,
Low-Quality, Expensive,
Unreliable,
Error Prone,
Fragile,
…
27. (C)opyright Dave Farley 2017
The Problem:
So if there is a disconnect, how can we bridge that gap?
29. (C)opyright Dave Farley 2017
The Solution:
Establish a common shared language for expressing a user’s need
31. (C)opyright Dave Farley 2017
Technique:
Always capture any requirement from the perspective of “an external user of the system”
41. (C)opyright Dave Farley 2017
Story Templates
123 Some Story S
As a user I want some behaviour so that I can achieve some benefit
Acceptance Criteria:
▪ A result that will indicate that the benefit is achieved (Test Case 1a..n)
▪ Another result to confirm the benefit (Test Case 2a..n)
42. (C)opyright Dave Farley 2017
Technique:
Make the Team’s “Definition of Done” = a minimum of one Acceptance Test for each Acceptance Criterion
44. (C)opyright Dave Farley 2017
Technique:
Imagine the least technical person who understands the problem domain reading the spec - it should make sense to them!
46. (C)opyright Dave Farley 2017
Technique:
Avoid “Technical Stories”; always find the fundamental user need
49. (C)opyright Dave Farley 2017
Technique:
Make each story, each specification, as small as possible
Small is Good!
51. (C)opyright Dave Farley 2015
So What’s So Hard?
• Tests break when the SUT changes (particularly the UI)
• This is a problem of design: the tests are too tightly-coupled to the SUT!
• History is littered with poor implementations:
• UI record-and-playback systems (Anti-Pattern!)
• Record-and-playback of production data (Anti-Pattern!)
• Dumps of production data to test systems (Anti-Pattern!)
• Nasty automated testing products (Anti-Pattern!)
53. (C)opyright Dave Farley 2015
Who Owns the Tests?
• Anyone can write a test
• Developers are the people who will break tests
• Therefore Developers own the responsibility to keep them working
• A separate Testing/QA team owning automated tests (Anti-Pattern!)
55. (C)opyright Dave Farley 2015
Properties of Good Acceptance Tests
• “What” not “How”
• Isolated from other tests
• Repeatable
• Uses the language of the problem domain
• Tests ANY change
• Efficient
57.–64. (C)opyright Dave Farley 2015
“What” not “How”
[Diagram sequence: an exchange with several channels - a Public API, a FIX API, a UI and a Trade Reporting Gateway - serving API traders, UI traders, Market Makers, a Clearing Destination and other external end-points. Coupling each test case directly to an individual channel produces a brittle explosion of channel-specific test cases; instead, test cases drive channel-agnostic drivers and external stubs (API, UI, FIX-API), so that the test infrastructure is common to all acceptance tests.]
66. (C)opyright Dave Farley 2015
“What” not “How” - Separate Deployment from Testing
• Every Test should control its start conditions, and so should start and init the app (Anti-Pattern!)
• Acceptance Test deployment should be a rehearsal for Production Release
• This separation of concerns provides an opportunity for optimisation
• Parallel tests in a shared environment
• Lower test start-up overhead
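The deploy-once, test-many optimisation can be sketched like this (hypothetical names; a toy stand-in for a real deployment, not the deck's implementation):

```python
# Sketch: deploy the system once per stage, then let each test initialise
# only its own start conditions against the shared deployment.
class DeployedSystem:
    def __init__(self):
        self.accounts = {}

    def create_account(self, name):
        self.accounts[name] = {"orders": []}
        return name


deployment_count = 0


def deploy_once():
    # The expensive step: a rehearsal for production release. Runs once,
    # not once per test.
    global deployment_count
    deployment_count += 1
    return DeployedSystem()


system = deploy_once()


def test_a():
    # The test controls its own start conditions, not the deployment.
    acct = system.create_account("test_a_user")
    assert acct in system.accounts


def test_b():
    acct = system.create_account("test_b_user")
    assert acct in system.accounts


# Both tests run (and could run in parallel) against one shared deployment.
test_a()
test_b()
assert deployment_count == 1
```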
67. (C)opyright Dave Farley 2015
Properties of Good Acceptance Tests
• “What” not “How”
• Isolated from other tests
• Repeatable
• Uses the language of the problem domain
• Tests ANY change
• Efficient
69. (C)opyright Dave Farley 2015
Test Isolation
• Any form of testing is about evaluating something in controlled circumstances
• Isolation works on multiple levels
• Isolating the System under test
• Isolating test cases from each other
• Isolating test cases from themselves (temporal isolation)
• Isolation is a vital part of your Test Strategy
71.–75. (C)opyright Dave Farley 2015
Test Isolation - Isolating the System Under Test
[Diagram: System Under Test ‘B’ sits between External System ‘A’ and External System ‘C’. Testing ‘B’ end-to-end through the real external systems leaves its behaviour uncontrolled and unverifiable (Anti-Pattern!).]
76. (C)opyright Dave Farley 2015
Test Isolation - Isolating the System Under Test
[Diagram: Test Cases drive System Under Test ‘B’ in isolation and assert on its Verifiable Output.]
78.–81. (C)opyright Dave Farley 2015
Test Isolation - Validating The Interfaces
[Diagram: each system is isolated in turn. Test Cases with Verifiable Output exercise External System ‘A’, System Under Test ‘B’ and External System ‘C’ separately, validating the interfaces between them.]
82. (C)opyright Dave Farley 2015
Test Isolation - Isolating Test Cases
(Assuming multi-user systems…)
• Tests should be efficient - We want to run LOTS!
• What we really want is to deploy once, and run LOTS of tests
• So we must avoid ANY dependencies between tests…
• Use natural functional isolation e.g.
• If testing Amazon, create a new account and a new book/product for every test-case
• If testing eBay, create a new account and a new auction for every test-case
• If testing GitHub, create a new account and a new repository for every test-case
• …
83.–91. (C)opyright Dave Farley 2015
Test Isolation - Temporal Isolation
• We want repeatable results
• If I run my test-case twice it should work both times

def test_should_place_an_order(self):
    self.store.createBook("Continuous Delivery")
    order = self.store.placeOrder(book="Continuous Delivery")
    self.store.assertOrderPlaced(order)

• Alias your functional-isolation entities
• In your test case create account ‘Dave’; in reality the test infrastructure asks the application to create account ‘Dave2938472398472’ and aliases it
• Likewise the book “Continuous Delivery” becomes “Continuous Delivery1234” on one run and “Continuous Delivery6789” on the next, so repeated runs never collide
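The aliasing idea can be sketched like this (illustrative names; a toy stand-in for the test infrastructure, not the deck's implementation):

```python
# Sketch: the test case uses the friendly name; the test infrastructure
# maps it to a unique real name for each run, giving temporal isolation.
import itertools


class AliasStore:
    _counter = itertools.count(1234)  # shared so every run gets a new suffix

    def __init__(self):
        self.aliases = {}      # friendly name -> real, unique name
        self.catalogue = set()

    def create_book(self, title):
        # e.g. "Continuous Delivery" -> "Continuous Delivery1234"
        real_name = f"{title}{next(self._counter)}"
        self.aliases[title] = real_name
        self.catalogue.add(real_name)

    def place_order(self, book):
        real_name = self.aliases[book]  # test case still says the friendly name
        assert real_name in self.catalogue
        return {"book": real_name, "status": "PLACED"}


store = AliasStore()
store.create_book("Continuous Delivery")
order = store.place_order(book="Continuous Delivery")

# A second run of the same test creates a different real book,
# so the two runs cannot collide.
store2 = AliasStore()
store2.create_book("Continuous Delivery")
assert store.aliases["Continuous Delivery"] != store2.aliases["Continuous Delivery"]
```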
92. (C)opyright Dave Farley 2015
Properties of Good Acceptance Tests
• “What” not “How”
• Isolated from other tests
• Repeatable
• Uses the language of the problem domain
• Tests ANY change
• Efficient
95.–98. (C)opyright Dave Farley 2015
Repeatability - Test Doubles
[Diagram: in Production, a Local Interface to the External System handles communications to the real External System; in the Test Environment, Configuration swaps those communications for a TestStub simulating the External System behind the same Local Interface.]
99.–103. (C)opyright Dave Farley 2015
Test Doubles As Part of Test Infrastructure
[Diagram: Test Cases exercise the System Under Test through its Public Interface, while a Test Infrastructure Back-Channel lets them control and interrogate the TestStub simulating the External System behind its Local Interface.]
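A back-channel-controlled test stub can be sketched like this (all names are illustrative assumptions, not the deck's real infrastructure):

```python
# Sketch: a stub simulating the external system, plus a "back-channel"
# that only the test infrastructure uses to arrange and verify it.
class ExternalSystemStub:
    def __init__(self):
        self.received = []         # visible via the back-channel
        self.canned_reply = "ACK"

    # --- the Local Interface the SUT talks to (same as production) ---
    def send(self, message):
        self.received.append(message)
        return self.canned_reply

    # --- the back-channel, used only by tests ---
    def stub_reply_with(self, reply):
        self.canned_reply = reply

    def messages_sent(self):
        return list(self.received)


class SystemUnderTest:
    def __init__(self, external):
        self.external = external

    def settle_trade(self, trade_id):
        return self.external.send(f"SETTLE {trade_id}")


stub = ExternalSystemStub()
sut = SystemUnderTest(stub)

stub.stub_reply_with("SETTLED")               # arrange via the back-channel
assert sut.settle_trade("T-1") == "SETTLED"   # act through the public interface
assert stub.messages_sent() == ["SETTLE T-1"] # verify via the back-channel
```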
104. (C)opyright Dave Farley 2015
Properties of Good Acceptance Tests
• “What” not “How”
• Isolated from other tests
• Repeatable
• Uses the language of the problem domain
• Tests ANY change
• Efficient
107. (C)opyright Dave Farley 2015
Language of the Problem Domain - DSL
• A Simple ‘DSL’ Solves many of our problems
• Ease of TestCase creation
• Readability
• Ease of Maintenance
• Separation of “What” from “How”
• Test Isolation
• The Chance to abstract complex set-up and scenarios
• …
BDD
Behaviour Driven Development
108. (C)opyright Dave Farley 2015
Language of the Problem Domain - External DSL
ThoughtWorks’ TWIST / FITnesse (tabular style):
|eg.Division|
|numerator|denominator|quotient?|
|10 |2 |5 |
|12.6 |3 |4.2 |
|100 |4 |33 |
JBehave / Cucumber - Gherkin (narrative style):
Narrative:
In order to communicate effectively to the business some functionality
As a development team
I want to use Behaviour-Driven Development
Lifecycle:
Before:
Given a step that is executed before each scenario
After:
Outcome: ANY
Given a step that is executed after each scenario regardless of outcome
Outcome: SUCCESS
Given a step that is executed after each successful scenario
Outcome: FAILURE
Given a step that is executed after each failed scenario
Scenario: A scenario is a collection of executable steps of different type
Given step represents a precondition to an event
When step represents the occurrence of the event
Then step represents the outcome of the event
Examples:
|precondition|be-captured|
|abc|be captured |
|xyz|not be captured|
109. (C)opyright Dave Farley 2015
Language of the Problem Domain - Internal DSL
EasyB - Groovy:
given "an invalid zip code", {
    invalidzipcode = "221o1"
}
and "given the zipcodevalidator is initialized", {
    zipvalidate = new ZipCodeValidator()
}
when "validate is invoked with the invalid zip code", {
    value = zipvalidate.validate(invalidzipcode)
}
then "the validator instance should return false", {
    value.shouldBe false
}
My Homebrew - Java:
@Channel(fixApi, dealTicket, publicApi)
@Test
public void shouldSuccessfullyPlaceAnImmediateOrCancelBuyMarketOrder()
{
    trading.placeOrder("instrument", "side: buy", "price: 123.45", "quantity: 4", "goodUntil: Immediate");
    trading.waitForExecutionReport("executionType: Fill", "orderStatus: Filled",
                                   "side: buy", "quantity: 4", "matched: 4", "remaining: 0",
                                   "executionPrice: 123.45", "executionQuantity: 4");
}
My Homebrew - Python:
from acceptance_test_dsl.acc_test_dsl import AccTestDsl

class PlaceOrderTest(AccTestDsl):
    def setUp(self):
        AccTestDsl.setUp(self)
        self.add_channel("TRADING")
        self.add_end_point("MQ")

    def test_should_successfully_place_market_order(self):
        order = self.trading.placeOrder(symbol="ZVZZT",
                                        orderType="Market",
                                        qty=300, TimeInForce='IOC')
        self.trading.assertOrderPlaced(order)
110.–112. (C)opyright Dave Farley 2015
Language of the Problem Domain - DSL
@Test
public void shouldSupportPlacingValidBuyAndSellLimitOrders()
{
    trading.selectDealTicket("instrument");
    trading.dealTicket.placeOrder("type: limit", "bid: 4@10");
    trading.dealTicket.checkFeedbackMessage("You have successfully sent a limit order to buy 4.00 contracts at 10.0");
    trading.dealTicket.dismissFeedbackMessage();
    trading.dealTicket.placeOrder("type: limit", "ask: 4@9");
    trading.dealTicket.checkFeedbackMessage("You have successfully sent a limit order to sell 4.00 contracts at 9.0");
}

@Test
public void shouldSuccessfullyPlaceAnImmediateOrCancelBuyMarketOrder()
{
    fixAPIMarketMaker.placeMassOrder("instrument", "ask: 11@52", "ask: 10@51", "ask: 10@50", "bid: 10@49");
    fixAPI.placeOrder("instrument", "side: buy", "quantity: 4", "goodUntil: Immediate", "allowUnmatched: true");
    fixAPI.waitForExecutionReport("executionType: Fill", "orderStatus: Filled",
                                  "side: buy", "quantity: 4", "matched: 4", "remaining: 0",
                                  "executionPrice: 50", "executionQuantity: 4");
}

@Before
public void beforeEveryTest()
{
    adminAPI.createInstrument("name: instrument");
    registrationAPI.createUser("user");
    registrationAPI.createUser("marketMaker", "accountType: MARKET_MAKER");
    tradingUI.loginAsLive("user");
}
113. (C)opyright Dave Farley 2015
Language of the Problem Domain - DSL
public void placeOrder(final String... args)
{
    final DslParams params =
        new DslParams(args,
                      new OptionalParam("type").setDefault("Limit").setAllowedValues("limit", "market", "StopMarket"),
                      new OptionalParam("side").setDefault("Buy").setAllowedValues("buy", "sell"),
                      new OptionalParam("price"),
                      new OptionalParam("triggerPrice"),
                      new OptionalParam("quantity"),
                      new OptionalParam("stopProfitOffset"),
                      new OptionalParam("stopLossOffset"),
                      new OptionalParam("confirmFeedback").setDefault("true"));
    getDealTicketPageDriver().placeOrder(params.value("type"),
                                         params.value("side"),
                                         params.value("price"),
                                         params.value("triggerPrice"),
                                         params.value("quantity"),
                                         params.value("stopProfitOffset"),
                                         params.value("stopLossOffset"));
    if (params.valueAsBoolean("confirmFeedback"))
    {
        getDealTicketPageDriver().clickOrderFeedbackConfirmationButton();
    }
    LOGGER.debug("placeOrder(" + Arrays.deepToString(args) + ")");
}
114. (C)opyright Dave Farley 2015
Language of the Problem Domain - DSL
public void placeOrder(final String... args)
{
    final DslParams params = new DslParams(args,
        new RequiredParam("instrument"),
        new OptionalParam("clientOrderId"),
        new OptionalParam("order"),
        new OptionalParam("side").setAllowedValues("buy", "sell"),
        new OptionalParam("orderType").setAllowedValues("market", "limit"),
        new OptionalParam("price"),
        new OptionalParam("bid"),
        new OptionalParam("ask"),
        new OptionalParam("symbol").setDefault("BARC"),
        new OptionalParam("quantity"),
        new OptionalParam("goodUntil").setAllowedValues("Immediate", "Cancelled").setDefault("Cancelled"),
        new OptionalParam("allowUnmatched").setAllowedValues("true", "false").setDefault("true"),
        new OptionalParam("possibleResend").setAllowedValues("true", "false").setDefault("false"),
        new OptionalParam("unauthorised").setAllowedValues("true"),
        new OptionalParam("brokeredAccountId"),
        new OptionalParam("msgSeqNum"),
        new OptionalParam("orderCapacity").setAllowedValues("AGENCY", "PRINCIPAL", "").setDefault("PRINCIPAL"),
        new OptionalParam("accountType").setAllowedValues("CUSTOMER", "HOUSE", "").setDefault("HOUSE"),
        new OptionalParam("accountClearingReference"),
        new OptionalParam("expectedOrderRejectionStatus").setAllowedValues("TOO_LATE_TO_ENTER",
                                                                           "BROKER_EXCHANGE_OPTION",
                                                                           "UNKNOWN_SYMBOL",
                                                                           "DUPLICATE_ORDER"),
        new OptionalParam("expectedOrderRejectionReason").setAllowedValues("INSUFFICIENT_LIQUIDITY",
                                                                           "INSTRUMENT_NOT_OPEN",
                                                                           "INSTRUMENT_DOES_NOT_EXIST",
                                                                           "DUPLICATE_ORDER",
                                                                           "QUANTITY_NOT_VALID",
                                                                           "PRICE_NOT_VALID",
                                                                           "INVALID_ORDER_INSTRUCTION",
                                                                           "OUTSIDE_VOLATILITY_BAND",
                                                                           "INVALID_INSTRUMENT_SYMBOL",
                                                                           "ACCESS_DENIED",
                                                                           "INSTRUMENT_SUSPENDED"),
        new OptionalParam("expectedSessionRejectionReason").setAllowedValues("INVALID_TAG_NUMBER",
                                                                             "REQUIRED_TAG_MISSING",
                                                                             …
115. (C)opyright Dave Farley 2015
Language of the Problem Domain - DSL
@Test
public void shouldSupportPlacingValidBuyAndSellLimitOrders()
{
    tradingUI.showDealTicket("instrument");
    tradingUI.dealTicket.placeOrder("type: limit", "bid: 4@10");
    tradingUI.dealTicket.checkFeedbackMessage("You have successfully sent a limit order to buy 4.00 contracts at 10.0");
    tradingUI.dealTicket.dismissFeedbackMessage();
    tradingUI.dealTicket.placeOrder("type: limit", "ask: 4@9");
    tradingUI.dealTicket.checkFeedbackMessage("You have successfully sent a limit order to sell 4.00 contracts at 9.0");
}

@Test
public void shouldSuccessfullyPlaceAnImmediateOrCancelBuyMarketOrder()
{
    fixAPIMarketMaker.placeMassOrder("instrument", "ask: 11@52", "ask: 10@51", "ask: 10@50", "bid: 10@49");
    fixAPI.placeOrder("instrument", "side: buy", "quantity: 4", "goodUntil: Immediate", "allowUnmatched: true");
    fixAPI.waitForExecutionReport("executionType: Fill", "orderStatus: Filled",
                                  "side: buy", "quantity: 4", "matched: 4", "remaining: 0",
                                  "executionPrice: 50", "executionQuantity: 4");
}
117. (C)opyright Dave Farley 2015
Language of the Problem Domain - DSL
@Channel(fixApi, dealTicket, publicApi)
@Test
public void shouldSuccessfullyPlaceAnImmediateOrCancelBuyMarketOrder()
{
trading.placeOrder("instrument", "side: buy", "price: 123.45", "quantity: 4", "goodUntil: Immediate");
trading.waitForExecutionReport("executionType: Fill", "orderStatus: Filled",
"side: buy", "quantity: 4", "matched: 4", "remaining: 0",
"executionPrice: 123.45", "executionQuantity: 4");
}
119. (C)opyright Dave Farley 2015
Language of the Problem Domain - Evolving a DSL
• Write the test case first
• Whoever would normally write the case specifies the
language to express the idea
• Get a Developer to implement the DSL to
support the new test case
• It’s OK for the Dev to refine the language to make it
cleaner and more general - but keep it simple!
• Devs should keep pushing the DSL to be clean of ANY
understanding of the “How”
• Think of this as designing a language. Have some rules
about what should be in and what should not.
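The slides’ DSL calls (e.g. `trading.placeOrder(...)`) can be sketched as a thin translation layer that turns domain-language "name: value" pairs into a protocol-driver call. This is an illustrative reconstruction, not Dave’s actual framework: `TradingDriver`, `TradingDsl` and the parsing scheme are hypothetical names assumed for the example.

```java
import java.util.HashMap;
import java.util.Map;

// Protocol-driver abstraction: one implementation per channel (UI, FIX, public API...)
interface TradingDriver {
    String placeOrder(Map<String, String> params);
}

// The DSL layer: speaks the language of the problem domain ("what"),
// knows nothing about how the order actually reaches the system ("how").
class TradingDsl {
    private final TradingDriver driver;

    TradingDsl(TradingDriver driver) { this.driver = driver; }

    // Accepts "name: value" pairs and delegates to whichever driver is plugged in
    public String placeOrder(String... params) {
        Map<String, String> parsed = new HashMap<>();
        for (String p : params) {
            String[] kv = p.split(":", 2);
            parsed.put(kv[0].trim(), kv[1].trim());
        }
        return driver.placeOrder(parsed);
    }
}

public class DslSketch {
    public static void main(String[] args) {
        // Fake driver for illustration: echoes the side and quantity it received
        TradingDriver fakeDriver = params ->
            "placed " + params.get("side") + " " + params.get("quantity");
        TradingDsl trading = new TradingDsl(fakeDriver);
        System.out.println(trading.placeOrder("side: buy", "quantity: 4"));
    }
}
```

Because the test case only ever talks to `TradingDsl`, the same test can run over any channel by swapping the driver - which is exactly what the `@Channel` annotation on the next slides selects.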
121. (C)opyright Dave Farley 2015
DSL - Four Layer Structure
[Diagram, built up over slides 121-124: a row of Test Cases (Executable Specifications) on top; a Domain Specific Language layer beneath them; Protocol Drivers (e.g. UI, e.g. API) and External System Stubs below that; the System Under Test at the bottom.]
125. (C)opyright Dave Farley 2015
Properties of Good Acceptance Tests
• “What” not “How”
• Isolated from other tests
• Repeatable
• Uses the language of the problem domain
• Tests ANY change
• Efficient
127. (C)opyright Dave Farley 2015
Testing with Time
• Test Cases should be deterministic
• Time is a problem for determinism - There are
two options:
• Ignore time
• Control time
128. (C)opyright Dave Farley 2015
Testing With Time - Ignore Time
Mechanism
Filter out time-based values in your test
infrastructure so that they are ignored
Pros:
• Simple!
Cons:
• Can miss errors
• Prevents any hope of testing complex time-based
scenarios
129. (C)opyright Dave Farley 2015
Testing With Time - Controlling Time
Mechanism
Treat Time as an external dependency, like an
external system - and Fake it!
Pros:
• Very Flexible!
• Can simulate any time-based scenario, with time under the
control of the test case.
Cons:
• Slightly more complex infrastructure
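The "fake the clock" approach can be sketched in a few lines. This is a minimal illustration of the pattern, assuming hypothetical names (`TimeSource`, `TestClock`, `Loan`) rather than the slides' exact code:

```java
// The seam: production code reads time only through this interface.
interface TimeSource {
    long getTime();
}

// Production implementation: the real system clock.
class SystemTimeSource implements TimeSource {
    public long getTime() { return System.currentTimeMillis(); }
}

// Test implementation: time is under the control of the test case.
class TestClock implements TimeSource {
    private long now;
    TestClock(long start) { this.now = start; }
    public long getTime() { return now; }
    public void travel(long millis) { now += millis; }
}

// Hypothetical domain object that depends on time only via the seam.
class Loan {
    private final long dueAt;
    private final TimeSource clock;
    Loan(TimeSource clock, long loanPeriodMillis) {
        this.clock = clock;
        this.dueAt = clock.getTime() + loanPeriodMillis;
    }
    boolean isOverdue() { return clock.getTime() > dueAt; }
}

public class ClockSketch {
    public static void main(String[] args) {
        long week = 7L * 24 * 60 * 60 * 1000;
        TestClock clock = new TestClock(0);
        Loan book = new Loan(clock, 4 * week);   // due in one month
        clock.travel(week);
        System.out.println(book.isOverdue());    // false: only one week has passed
        clock.travel(4 * week);
        System.out.println(book.isOverdue());    // true: five weeks have passed
    }
}
```

The test runs in milliseconds yet exercises weeks of simulated time - the "time.travel" calls on the next slides work the same way, just driven remotely over a back-channel.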
130. (C)opyright Dave Farley 2015
Testing With Time - Controlling Time
@Test
public void shouldBeOverdueAfterOneMonth()
{
book = library.borrowBook("Continuous Delivery");
assertFalse(book.isOverdue());
time.travel("+1 week");
assertFalse(book.isOverdue());
time.travel("+4 weeks");
assertTrue(book.isOverdue());
}
133. (C)opyright Dave Farley 2015
Testing With Time - Controlling Time
Test Infrastructure
[Diagram: Test Cases drive the System Under Test via the test infrastructure]
public void someTimeDependentMethod()
{
time = System.currentTimeMillis(); // the SUT reads the system clock directly
}
134. (C)opyright Dave Farley 2015
Testing With Time - Controlling Time
Test Infrastructure
[Diagram: Test Cases drive the System Under Test via the test infrastructure]
import Clock;
public void someTimeDependentMethod()
{
time = Clock.getTime(); // the SUT now reads time through the Clock seam
}
135. (C)opyright Dave Farley 2015
Testing With Time - Controlling Time
Test Infrastructure
[Diagram: Test Cases drive the System Under Test via the test infrastructure]
import Clock;
public void someTimeDependentMethod()
{
time = Clock.getTime();
}
public class Clock {
public static TimeSource clock = new SystemClock(); // TimeSource: the clock interface
public static void setTime(long newTime) {
clock.setTime(newTime);
}
public static long getTime() {
return clock.getTime();
}
}
136. (C)opyright Dave Farley 2015
Testing With Time - Controlling Time
Test Infrastructure
[Diagram: as before, plus a Back-Channel from the test infrastructure into the SUT's Clock]
import Clock;
public void someTimeDependentMethod()
{
time = Clock.getTime();
}
Test Infrastructure (Back-Channel):
public void onInit() {
// Remote Call - back-channel
systemUnderTest.setClock(new TestClock());
}
public void timeTravel(String time) {
long newTime = parseTime(time);
// Remote Call - back-channel
systemUnderTest.setTime(newTime);
}
public class Clock {
public static TimeSource clock = new SystemClock();
public static void setTime(long newTime) {
clock.setTime(newTime);
}
public static long getTime() {
return clock.getTime();
}
}
138. (C)opyright Dave Farley 2015
Test Environment Types
• Some Tests need special treatment.
• Tag Tests with properties and allocate them
dynamically:
@TimeTravel
@Test
public void shouldDoSomethingThatNeedsFakeTime()
…
@Destructive
@Test
public void shouldDoSomethingThatKillsPartOfTheSystem()
…
@FPGA(version=1.3)
@Test
public void shouldDoSomethingThatRequiresSpecificHardware()
…
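Tag-driven allocation like `@TimeTravel` above can be sketched with a runtime annotation and a reflective scan; a scheduler then routes the tagged tests to hosts that support fake time. This is a minimal sketch of the idea, not a real test framework - the annotation and class names are illustrative:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

// Hypothetical property tag: "this test needs an environment with a fake clock".
@Retention(RetentionPolicy.RUNTIME)
@interface TimeTravel {}

class SomeAcceptanceTests {
    @TimeTravel
    public void shouldDoSomethingThatNeedsFakeTime() {}

    public void shouldDoSomethingOrdinary() {}
}

public class TagAllocatorSketch {
    // Returns the names of tests that must be allocated to a time-travel-capable host.
    static List<String> timeTravelTests(Class<?> testClass) {
        List<String> result = new ArrayList<>();
        for (Method m : testClass.getDeclaredMethods()) {
            if (m.isAnnotationPresent(TimeTravel.class)) {
                result.add(m.getName());
            }
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(timeTravelTests(SomeAcceptanceTests.class));
    }
}
```

In practice a test runner's own tagging mechanism (e.g. JUnit tags) gives you this for free; the point is that the environment requirement lives on the test, and allocation happens dynamically.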
142. (C)opyright Dave Farley 2015
Properties of Good Acceptance Tests
• “What” not “How”
• Isolated from other tests
• Repeatable
• Uses the language of the problem domain
• Tests ANY change
• Efficient
152. (C)opyright Dave Farley 2015
Make Test Cases Internally Synchronous
• Look for a “Concluding Event”; listen for that in
your DSL to report an async call as complete
153. (C)opyright Dave Farley 2015
Make Test Cases Internally Synchronous
Example DSL-level implementation…
public String placeOrder(String... params)
{
orderSent = sendAsyncPlaceOrderMessage(parseOrderParams(params));
return waitForOrderConfirmedOrFailOnTimeOut(orderSent);
}
• Look for a “Concluding Event”; listen for that in
your DSL to report an async call as complete
155. (C)opyright Dave Farley 2015
Make Test Cases Internally Synchronous
• Look for a “Concluding Event”; listen for that in
your DSL to report an async call as complete
• If you really have to, implement a
“poll-and-timeout” mechanism in your test infrastructure
• Never, never, never put a “wait(xx)” in your tests and expect
them to be (a) Reliable or (b) Efficient - that is an Anti-Pattern!
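The fall-back "poll-and-timeout" mechanism can be sketched as a small helper in the test infrastructure. The names here are illustrative assumptions, not the slides' actual code - and note the contrast with a blind `wait(xx)`: the helper returns as soon as the concluding condition is observed, and fails deterministically at the timeout:

```java
import java.util.function.Supplier;

// Sketch of a last-resort poll-and-timeout helper for the test infrastructure.
// Prefer blocking on a real "concluding event" where one exists.
public class PollSketch {
    static boolean waitFor(Supplier<Boolean> condition, long timeoutMillis, long pollMillis)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            if (condition.get()) {
                return true;           // concluding condition observed - stop immediately
            }
            Thread.sleep(pollMillis);  // bounded poll, never a blind wait(xx)
        }
        return false;                  // timed out - the caller should fail the test
    }

    public static void main(String[] args) throws InterruptedException {
        long start = System.currentTimeMillis();
        // Condition becomes true after ~50ms; poll every 10ms, time out after 1s
        boolean ok = waitFor(() -> System.currentTimeMillis() - start > 50, 1000, 10);
        System.out.println(ok);
    }
}
```

A fast system pays only a few poll intervals; a broken one fails at the timeout instead of passing by luck, which is what makes this reliable where a fixed sleep is not.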
157. (C)opyright Dave Farley 2015
Scaling-Up
[Diagram: the full Deployment Pipeline - Commit → Acceptance → Component Performance / System Performance → Manual Test Env. / Staging Env. / Production Env., each deployed by the Deployment App., backed by the Source Repository and Artifact Repository.]
158. (C)opyright Dave Farley 2015
Scaling-Up
[Diagram, built up over slides 158-162: the Acceptance stage fans out into a dedicated Acceptance Test Environment - a grid of Test Hosts running the acceptance-test suite in parallel.]
164. (C)opyright Dave Farley 2015
Data - SUT State
• One more desirable property…
• We should be able to run our tests anywhere
• Ideally we want the SUT to be in a fairly ‘Neutral’
State
• Data defining SUT State falls into 3 categories:
• Transactional Data
• Reference Data
• Configuration Data
Strategies: Use Prod Data · Generate in Scope of Test · Use Versioned Test Data
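"Generate in scope of test" usually means each test creates its own uniquely-named transactional data, so tests stay isolated even while sharing one running SUT. A minimal sketch of that aliasing idea - the names and scheme are illustrative assumptions, not the slides' actual infrastructure:

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch of generating transactional data in the scope of each test:
// the test says "account: testAccount", the infrastructure maps that alias
// to a name that is unique for this test, so tests never share state.
public class TestDataSketch {
    private static final AtomicLong COUNTER = new AtomicLong();

    static String uniqueName(String alias) {
        return alias + "-" + COUNTER.incrementAndGet();
    }

    public static void main(String[] args) {
        String a = uniqueName("testAccount");
        String b = uniqueName("testAccount");
        System.out.println(a.equals(b));  // false: each test sees its own data
    }
}
```

Because the DSL does the aliasing, test cases keep reading naturally ("testAccount") while the system sees distinct accounts - which is what lets thousands of tests run concurrently against one deployment.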
176. (C)opyright Dave Farley 2015
Anti-Patterns in Acceptance Testing
• Don’t use UI record-and-playback systems
• Don’t record-and-playback production data. This has a role, but it is NOT Acceptance Testing
• Don’t dump production data into your test systems; instead, define the absolute minimum data that you need
• Don’t assume Nasty Automated Testing Products(tm) will do what you need. Be very sceptical about them. Start with YOUR strategy and evaluate tools against that.
• Don’t have a separate Testing/QA team! Quality is down to everyone - Developers own Acceptance Tests!!!
• Don’t let every test start and init the app. Optimise for cycle time; be efficient in your use of test environments.
• Don’t include systems outside of your control in your Acceptance Test scope
• Don’t put ‘wait()’ instructions in your tests hoping they will solve intermittency
189. (C)opyright Dave Farley 2015
Tricks for Success
• Do Ensure That Developers Own the Tests
• Do Focus Your Tests on “What” not “How”
• Do Think of Your Tests as “Executable Specifications”
• Do Make Acceptance Testing Part of Your “Definition of Done”
• Do Keep Tests Isolated from One Another
• Do Keep Your Tests Repeatable
• Do Use the Language of the Problem Domain - Do Try the DSL Approach, Whatever Your Tech
• Do Stub External Systems
• Do Test in “Production-Like” Environments
• Do Make Instructions Appear Synchronous at the Level of the Test Case
• Do Test for ANY Change
• Do Keep Your Tests Efficient