Testaus 2014 -seminaari: Arto Kiiskinen, Mirasys Oy. Case Mirasys: Testing a feature-rich software product in a rapidly changing requirements environment
This document discusses testing challenges for a software company that develops a video management system product with frequent changes to requirements, priorities, and hardware platforms. The key challenges include long-term planning difficulties due to constantly changing priorities, limited testing resources compared to the size of the product and number of developers, and setup time for complex integration tests. It provides recommendations for achieving high team effectiveness through minimizing interruptions, focusing testing on individual stories, and establishing trust in story-level testing to reduce risks for major releases.
1. (Development and) Testing a Software Product with Frequent Changes to Requirements and Priorities and Regular Changes in Hardware Platform
Arto Kiiskinen, Product Development Manager, Mirasys
2. Contents
0..10 minutes: The company, the product, the people and the challenges
10..30 minutes: Scrum experience 2008 - 2012; Kanban experience 2012 - 2014
30..40 minutes: Testing: How to survive in chaos? DOs and DON'Ts; Future: what we SHOULD do
3. Mirasys Oy
• Established 1997
• Sales in over 40 countries
• Over 35,000 customers world-wide
• Over 500 partners
• More than 600,000 cameras connected
• Supports over 1,600 camera models
• Software R&D 20%
Arto Kiiskinen
Product Development Manager / Mirasys HQ R&D
http://www.linkedin.com/pub/arto-kiiskinen/0/611/a98
[World map: Mirasys Headquarters in Helsinki; Mirasys sales offices and representatives in Stockholm, Tallinn, London, Munich, Barcelona, Milan, Dubai, Bangalore, Bangkok, Johannesburg, the US, Sao Paulo, Lusaka, Bogota, Mexico City, Hyderabad and Doha]
4. MIRASYS Video Management System
• Open camera architecture
• ONVIF support
• Continuous camera driver development
• Close cooperation with leading IP camera manufacturers
User interfaces: Agile Virtual Matrix videowall; workstation, mobile (Android, iOS) and web user interfaces
Network: LAN / WAN / Internet; over 1,500 compatible IP camera models (analog and IP)
Mirasys DVMS server:
- Video streaming (multicasting, multistream, edge storage, ThruCast: direct stream from camera)
- Video recording
- Video content analysis
- 2-way audio
- Management, reporting
- ANPR, I/O
- Alarms, networking
- High scalability, remote use
- High availability, failover, automatic backup and restore
Integrations: access control, fire alarm system, burglar alarm, cash register, CBRN, perimeter monitoring, building portal, parking
5. Typical System
[Diagram: an Agile Virtual Matrix 3 x 4 video wall driven by display servers; AVM operator consoles and local workstations; two Mirasys DVMS recorder nodes serving IP cameras and analog cameras]
6. Scope of Product
HISTORY
First generation product (DINA) 1997 - 2005
Current product (VIPER) 2005->
Current sales release 7.0.1
Legacy support (major release -1): 6.0.1, 6.1.1, 6.1.5, 6.1.6, 6.2.1, 6.2.5, 6.4.0, 6.4.1, 6.4.3
First release 2006
500k LOC
1600+ test cases
7. Team
ONSITE
3-4 developers
1 tester (60%) + 1 tester (40%) + 1 tester (20%)
OFFSHORE
Camera driver team (Russia): 3 developers (Novgorod), 1 tester (Novgorod)
1 developer for mobile apps (iOS and Android) (Krasnoyarsk)
8. Mirasys is smaller than most of the main competitors
We want to be more agile:
• Choose some customers
• Choose some markets
• Choose to serve some big cases with customized changes
But being agile has some drawbacks: it means R&D has to be flexible. And that means...
9. Challenges
• Competing against bigger companies worldwide
• Pressure to support a lot of camera models
• Pressure to develop new features (2-3 year backlog)
• Large project deals are lucrative, so high priority items pop up regularly
- They almost always require new development -> a high priority story appears "out of nowhere"
- The delivery schedule forces a reset of release planning (and possibly multiple branches)
• Load from Support to R&D is unpredictable, making short term planning difficult
- 3-5 bugs from the field a week are passed from Support to R&D
• Long term planning is very difficult
- Story priorities are always changing; new high priority stories pop up
- Detailed specification is difficult: while something is in the specification stage, a new story pops up and has to be started urgently with a poor specification; old estimates and specifications "age" and are soon obsolete (waste)
• PC hardware is short-lived -> HW testing is continuous. The cost of hardware faults shipped to a customer is very high.
10. Testing Specific Challenges
• A lot of legacy features
- The product has been in use ~10 years.
- Engineers have left the company -> when the remaining people are not familiar with an area of code, it is harder to estimate the impact of a change -> harder to estimate what cases need retesting after a fix.
• Limited testing resources (1.3 persons) testing new non-driver features developed by 4-5 developers
- Complete testing of a release would take months: impossible
- Even running the most important 30% of all test cases takes 1.5 months
• Impossible to replicate real world installations completely
- Customers are using the system with hundreds of servers and thousands of cameras (15,000+ in an extreme case)
• Setup time for verifying some integration cases is long
- It might take a day to configure a system for a 5 minute test.
• Low amount of test automation
By being agile and wanting to serve our customers better than our bigger and slower competitors, we subject ourselves to constant changes to backlog priorities: CHAOS
11. How to achieve the highest possible team efficiency?
The MANAGER PROBLEM: How to achieve the best results in this environment?
12. Main principles!!
1. Do one thing at a time, as effectively as possible
2. Minimize regression
3. Maintain shippable product quality at all times
4. Minimize interruptions, finish what was started
5. Story/Bug priorities rule what is started next
13. Achieving High Team Effectiveness
An SW R&D engineer works at high efficiency when
• He feels he is working on the most important issue available to start at the moment
• He is never idle and he understands what task is next in line
• He has, or can get, the necessary knowledge & skills
• He has a clear specification or can ask for decisions instantly
• He can interact with others to "sound out" a design or a fix proposal (pair programming, code reviews)
• He has a peaceful environment allowing focused work (no unnecessary interruptions)
• He works in "sprints" of 3-5 days and can get assistance to verify quality and specification at the end of the sprints
• He has a mindset to watch out for breaking legacy features, and can instruct the tester to verify areas he thinks could be affected
• He is working in a team with good team spirit
14. Achieving High Team Effectiveness
An SW QA engineer / tester works at high efficiency when
• He feels he is working on the most important issue available to start at the moment
• He is never idle
• He has, or can get, the necessary knowledge & skills
• He has access to the necessary level of documentation for the feature under test
• He gets information about what other areas should be tested for regression
• He feels that his findings are taken seriously and feels no fear to report an issue
• He is interested in finding ways the solution could be improved
• He can focus on the activity: plan the testing and execute the testing without too many interruptions
• A suitably flexible test environment exists to reduce setup times
• He is working in a team with good team spirit
15. Achieving High Team Effectiveness
Communicate!
Try to achieve a balance between letting people feel they can always ask questions and get help,
BUT
try to watch out that developers are not interrupted all the time. Developers need peace to be able to focus. Testers tolerate more interruptions than developers.
Even though we don't use Scrum anymore, we have a "daily standup" meeting, 20 minutes each day:
- New errors
- Work in progress
- Does everyone have enough work?
- Any issues / problems / observations / ideas?
16. Priority Rules
Priorities
1. SHOWSTOPPER/CRITICAL BUG: current work is halted and the bug is investigated
2. MAJOR BUG: current work is finished and the bug is investigated next
3. Housekeeping tasks
4. Next new feature (in order of priority)
5. Minor errors
6. New feature stories
A story list must exist!
Priorities must exist!
A release plan for the current release (3-5 months) must exist!
A release plan for the next releases is highly desirable.
AND
Nobody does work without a story, bug id or some other ticket.
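The ordering above can be sketched as a simple priority queue. The `Ticket` type, the rank values and the ticket ids here are hypothetical illustrations, not Mirasys's actual tracker:

```python
import heapq
from dataclasses import dataclass, field

# Lower rank = handled sooner; ranks mirror the priority rules above
RANKS = {
    "showstopper": 0,    # halt current work and investigate
    "major_bug": 1,      # finish current work, then investigate
    "housekeeping": 2,
    "next_feature": 3,
    "minor_bug": 4,
    "feature_story": 5,
}

@dataclass(order=True)
class Ticket:
    rank: int
    ticket_id: str = field(compare=False)  # id does not affect ordering

def next_ticket(queue):
    """Pop the highest-priority ticket (lowest rank)."""
    return heapq.heappop(queue)

backlog = []
for tid, kind in [("VMS-12", "feature_story"),
                  ("VMS-40", "showstopper"),
                  ("VMS-33", "minor_bug")]:
    heapq.heappush(backlog, Ticket(RANKS[kind], tid))

print(next_ticket(backlog).ticket_id)  # → VMS-40, the showstopper comes first
```

Everything still enters the queue as a ticket, which matches the rule that nobody works without a story or bug id.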
17. Experience with Scrum
• 2, 3 and 4 week sprints trialed
• Support load messes up planning
• UI expert on site only 2 days a week -> a lot of waiting for UI specs
• Challenge to make the load even at the beginning and end of sprints
• Story estimation is difficult; effort easily climbs, functionality gets finished only partially and has to continue in the next sprint
• Testing starts late in the sprint and bugfixing pushes the story to the next sprint
Result:
• Almost never finish a story in a single sprint; it is always moved to the next sprint
• Demos and reviews cannot be arranged at sprint end
• Less sense of accomplishment for the team
• Sprint planning takes time but the plans are never realised -> waste
• A lot of effort is "invisible", making velocity estimation difficult -> in turn making the release-day estimate uncertain -> sales and marketing are hurting because of the unclear schedule, content and documentation
19. Experience with Kanban
Better quality
• Each developer focuses on one story at a time and takes as long as he needs
• Test handover includes documentation in the internal wiki
• Testing is usually done in 2-3 cycles for each story
• Acceptance testing is done per story
• Demos are arranged every 2 months and contain all stories done so far
• Marketing documentation and videos about new features are done just after finishing a story
Goal: Testing of changes to the software is done as soon as possible after making the changes. Expose faults fast, and trust the result!!
Peace of mind
• No sprint goals. When a story is done, it is DONE!
• Only showstopper bugs can stop work on a story or item. Usually developers can focus on a story from beginning to end without interruption
20. Goal should be that story testing uncovers all errors
[Timeline diagram: each story goes through design, estimation, development and testing; the release day is preceded by a needed maturization period and release testing. All errors should be found during story testing!!!]
You should always AIM to have release quality SW at the end of each story!!
21. Developer should
• Check for PO and UI acceptance regularly during implementation
• CODE REVIEWS / PAIR PROGRAMMING!! (or some other non-testing way to detect errors)
• Before sending the feature to testing:
- Write brief documentation about the feature: how to use it, how to install it
- Preferably subject the code to peer review (somehow)
- Spend time thinking about what could have broken with the changes, and instruct the tester what to check (for regression)
• Never leave issues sitting in "waiting for testing": as soon as it is ready from R&D, do at least a cursory test on it
Tester should
• Plan the test of the feature
• Execute and document the tests, including checking for regression in other areas
• Think about what new test cases are needed and create them
• Ask: can the testing be automated?
• Observe any peculiarities
• Observe whether usability is good
• Confirm findings with the team lead or product owner: what is the severity of the findings?
• Pass the results back to the developer
After 1-3 cycles the item should be ready.
Recommended
22. AVOID
• Too big releases
• Too big stories (have an upper limit for developer hours/days)
• Too many releases
• Too many branches
23. To avoid too much content & overloading testing close to release day
• Know or estimate team velocity (capability to deliver)
• Focus on estimating either the release day with given content, or the content with a given release day
• Focus on the testing of each story! DO NOT TRUST the "release testing" at the end.
• Rather, use release testing closer to release day to seek out variable test scenarios and install to different test environments. Experiment. Try to think what the customer will do when he gets his hands on the SW.
24. Take your software out of the comfort zone!
We notice that with our continuous process, bugs are ironed out smoothly, but closer to release or at the beta testing phase, when features are taken into use in different environments, new bugs arise.
The best test organization in the world would maintain environments of different sizes and configurations and automatically or regularly exercise the code in all of them routinely.
If the above is too difficult, systematic beta testing or a controlled "staged" launch will also help. Or an independent "approval into production" by the production team.
25. Velocity estimation
Facts about estimations:
• Estimates are generally HUGELY inaccurate.
• Doing detailed estimations is waste in itself; about the only valuable outcome is that estimating generally leads to discussion about the design, the user interface etc.
• The only hope we have is that we are consistently HUGELY inaccurate in the same way!!
Recommendation
• Estimate stories crudely as soon as you have most of the details
- We use "ideal hours"... a story is between 20 and 100 ideal hours long.
• Mark this estimate down, and follow how many hours the team gets done regularly
- After 1 year of follow-up, we averaged 50 hours a week for the whole team.
- We also noticed that if a story was over 100 hours long, the actual speed dropped considerably. Big stories spent too long in development, and then resulted in tens of bugs when tested.
- Find the balance: for us it was 80 - 100 hours. Stories bigger than this will always be split in two.
• Use velocity to estimate release schedule and content
- Velocity will not change much: you can estimate stories crudely, and then use team velocity to estimate when they would be done (code complete).
- Add to that any needed maturization period (for us, based on historical data, it was 6 weeks) and you have a likely release date.
We have been able to release ON THE PLANNED DAY with this approach.
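The arithmetic above is simple enough to sketch. This is a rough illustration only; the helper function, the start date and the 800-hour backlog are made up, while the 50 hours/week velocity and 6-week maturization period are the example values from this slide:

```python
from datetime import date, timedelta

def estimate_release(start, backlog_hours, velocity_per_week=50,
                     maturization_weeks=6):
    """Crude release-date estimate: weeks to code complete from team
    velocity, plus the historical maturization period."""
    dev_weeks = backlog_hours / velocity_per_week
    return start + timedelta(weeks=dev_weeks + maturization_weeks)

# e.g. ten ~80-ideal-hour stories at 50 hours/week = 16 weeks of
# development, plus 6 weeks of maturization = 22 weeks total
print(estimate_release(date(2014, 1, 6), backlog_hours=800))  # → 2014-06-09
```

The point is that crude story estimates plus a measured velocity give a usable date, even when the individual estimates are hugely inaccurate.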
26. Don't forget
Retrospectives
• At 1-2 month intervals, sit down with the team to review:
- What have we excelled in?
- What is going ok?
- What is making life difficult?
• Continuously strive to fix the things identified as making life difficult by the time of the next retrospective
Person development and longer term projects
• With Kanban it is easy to just focus on stories and put less focus on large, long term development activities: these need special handling.
• Development of new skills needs a special system.
27. Testing approach for the whole release
The target is to be able to trust the testing that is done for each story
• After each story is done, you should have "releasable software", because you constantly fix any critical / major bugs and you only consider a story done when all majors are fixed.
You should however plan for "release testing"
What we do: 2-3 months before release day, plan the release testing
• From all test cases, select the ones you would like to test, based on what has changed
• Then, based on what is still ongoing, work with your senior SW developers to understand what tests can be run while development is still ongoing. What are the SW areas they are not touching anymore?
• Start testing from those areas, with focus on the difficult and risky areas first.
• Check weekly what tests can be executed next.
• Increase the "release testing" task priority the closer you get to code complete. After code complete, release testing should be high priority for testers.
28. Trust means...
To be able to trust the testing in a story would mean that, in a perfect world, when the developer hands over the story to the tester, he would be perfectly able to list the things that need a regression check.
• It is all too easy to focus too much on the functionality and forget the regression checks.
• Code reviews help to identify risky areas in code that are difficult to find by executing tests!!
29. Automate... if you can
Use and develop tools to assist testing
• Virtual camera simulation software
- Simulators and scripts that simulate load on the system as in a customer environment with hundreds of servers and thousands of cameras.
• Repeat action tools
- Simple tools like "Do it Again" to repeat mouse and keyboard events
- We use this for errors that are hard to reproduce and require repeating the same action over and over... The tool is easy to use.
• GUI tools for automation
- We use Sikuli and Python
- Sikuli is a free platform for automating tasks, easy to learn and use, but it takes some Python skills to do complex things with it.
- We use it for uncovering memory leaks that are easy to miss in normal testing (they take thousands of actions to reveal themselves)
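A minimal sketch of the repeat-and-watch idea behind such leak hunting. Everything here is hypothetical: a real run would drive the GUI through Sikuli and watch the application process, whereas this stand-in repeats a deliberately leaky Python function and samples the Python heap with `tracemalloc`:

```python
import tracemalloc

def repeat_and_watch(action, repeats=1000, sample_every=100):
    """Repeat an action many times and sample traced heap usage, so slow
    leaks that need thousands of iterations become visible as a trend."""
    tracemalloc.start()
    samples = []
    for i in range(1, repeats + 1):
        action()
        if i % sample_every == 0:
            current, _peak = tracemalloc.get_traced_memory()
            samples.append(current)  # bytes currently allocated
    tracemalloc.stop()
    return samples

leaked = []
def leaky_action():
    leaked.append("x" * 1024)  # keeps references forever -> a slow leak

samples = repeat_and_watch(leaky_action, repeats=500, sample_every=100)
# For a leaking action, memory grows steadily from sample to sample
print(all(b > a for a, b in zip(samples, samples[1:])))
```

A single run of the action shows nothing; it is the monotone trend across hundreds of repeats that exposes the leak, which is exactly why these bugs are easy to miss in normal testing.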
30. Checklist for Survival
• Automate (but only selectively, where you get the most value)
• Code Reviews!!
• Trust (trust the testing for each story)
• Experiment & Vary (use release testing to load the software in different ways & take your software out of its comfort zone; install it on a brand new PC or take it out to a new environment)
• Protect (protect your team from interruptions: have clear rules for handling unexpected things; protect the release plan from fragmenting into too many releases and branches; protect the backlog: make sure a prioritization process exists and is working; protect the team spirit)
• Improve (have regular retrospectives to get rid of painful issues)
• Develop (don't abandon personal skill development or long term improvement projects; have a separate plan for them)
• Iterate (iterating a story early, or before its development, is most valuable)
• Document (too little is better than too much)
• Communicate (too much is better than too little; face to face > chat/phone > email)
• Value (your tester's opinion: if he complains, it is most likely true)