Testing Software Systems
1
1-Principle of Testing
• What is software testing
• Testing terminology
• Why testing is necessary
• Fundamental test process
• Re-testing and regression testing
• Expected results
• Prioritisation of tests
2
1.1-What is Software Testing
What people usually think:
• Second-class career option
• Verification of a running program
• Manual testing only
• Boring routine tasks

Professional approach:
• Respected discipline in software development
• Reviews, code inspections, static analysis, etc.
• Using test tools, test automation
• Analysis, design, programming test scripts, evaluation of results, etc.
3
1.1-What is Software Testing (2)
Definition:
• Testing is the demonstration that errors are NOT present in the program?
• Testing shows that the program performs its intended functions correctly?
• Testing is the process of demonstrating that a program does what it is supposed to do?
4
1.1-What is Software Testing (3)
• From testing user requirements to monitoring the system in operation
• From testing the functionality to checking all other aspects of software:
– Documents (specifications)
– Design (model)
– Code
– Code+platform
– Production, acceptance
– Usage, business process
• Verification: answers the question: “Have we done the system correctly?”
• Validation: answers the question: “Have we done the correct system?”
5
1.1-What is Software Testing (4)
Realities in Software Testing
• Testing can show the presence of errors but cannot show the absence of
errors (Dijkstra)
• Not all defects can be found
• Testing does not create quality software or remove defects
• Building without faults means – among other – testing very early
• Perfect development process is impossible, except in theory
• Perfect requirements: cognitive impossibility
6
1.2-Testing Terminology
Testing terminology
• No generally accepted set of terms
• ISEB follows British Standards BS 7925-1 and BS 7925-2
• Other standards in software testing provide partial terminologies
7
1.2-Testing Terminology (2)
Why Terminology?
• Poor communication
• Example: component – module – unit – basic – design – developer, ... testing
• There is no ”good” and ”bad” terminology, only undefined and defined
• Difficult to describe processes
• Difficult to describe status
8
1.3-Why Testing is Necessary
Depreciation of Software Testing
• Due to software errors the U.S. business loss is ~ $60 billion.
• 1/3 of software errors could be avoided by a better testing process
(National Institute of Standards and Technology, 2002)
• The testing process in most software companies is at the lowest levels of the CMMI model (usually 1 or 2)
(Software Engineering Institute, Carnegie Mellon University, Pittsburgh)
• All current software development models include software testing as an
essential part
9
1.3-Why Testing is Necessary (2)
Testing in Busine$$ Terms
[Diagram: money buys development, which delivers products; money buys testing, which delivers risk information, bug information and process information.]
10
1.3-Why Testing is Necessary (3)
Testing decreases cost
[Graph: the cost of finding and correcting a fault grows roughly tenfold per development phase, from 0.1x to 100x, the later it is found.]
12
1.3-Why Testing is Necessary (5)
Complexity
• Software – and its environment – are too complex to exhaustively test
their behaviour
• Software can be embedded
• Software has human users
• Software is part of the organization’s workflow
13
1.3-Why Testing is Necessary (6)
How much testing?
• This is a risk-based, business decision
– Test completion criteria
– Test prioritization criteria
– Decision strategy for the delivery
– Test manager presents the product's quality
• Test is never ready
• The answer is seldom ”more testing” but rather ”better testing”, see
the completion criteria:
– All test cases executed
– All test cases passed
– No unresolved (serious) incident reports
– Pre-defined coverage achieved
– Required reliability (MTBF) achieved
– Estimated number of remaining faults low enough
14
1.3-Why Testing is Necessary (7)
Exhaustive Testing
• Exhaustive testing is impossible
• Even in theory, exhaustive testing is wasteful because it does not prioritize
tests
• Contractual requirements on testing
• Non-negligent practice important from legal point of view
15
1.3-Why Testing is Necessary (8)
Risk-Based Testing
• Testing finds faults, which – when faults have been removed – decreases
the risk of failure in operation
• Risk-based testing
• Error: the ”mistake” (human, process or machine) that introduces a fault
into software
• Fault: ”bug” or ”defect”, a faulty piece of code or HW
• Failure: when faulty code is executed, it may lead to incorrect results (i.e.
to failure)
16
1.3-Why Testing is Necessary (9)
Cost of Failure
• Reliability: the probability of no failure
• Famous: American Airlines, Ariane 5 rocket, Heathrow Terminal 5
• Quality of life
• Safety-critical systems
• Embedded systems
• Usability requirements for embedded systems and Web applications
17
1.4-Fundamental Test Process
Test Process Definition
• Test Planning
• Test Specification
• Test Execution
• Test Recording & Evaluation
• Completion Criteria
18
1.4-Fundamental Test Process (2)
Test Planning
[Diagram: the organisation's Test Process and Test Strategy, together with the Project Specification, yield the Applied Test Process; its outputs are the Test Plan and any Exceptions to the Test Strategy.]
19
1.4-Fundamental Test Process (3)
Test Plan’s Goal
• High-level test plan and more detailed test plans
• Related to project plan
• Follows QA plan
• Configuration management, requirements, incident management
20
1.4-Fundamental Test Process (4)
Test Specification
• Test specification defines what to test
• Test specification is part of testware
• Basic building blocks of test specifications are test cases
• Test specification – instruction – script
• Test specification – requirements
• Test specification - reporting
21
1.4-Fundamental Test Process (5)
Test Case
• Unique name/title
• Unique ID
• Description
• Preconditions / prerequisites
• Actions (steps)
• Expected results
22
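The test case fields listed above can be sketched as a simple record type. A minimal sketch in Python; the field names follow the slide, while the class name and the sample values are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    # Fields mirror the slide's list; expected_results drives the
    # pass/fail comparison during test execution.
    case_id: str                                      # unique ID
    title: str                                        # unique name/title
    description: str = ""
    preconditions: list = field(default_factory=list) # prerequisites
    actions: list = field(default_factory=list)       # ordered steps
    expected_results: list = field(default_factory=list)

tc = TestCase(
    case_id="TC-042",
    title="Withdrawal within balance",
    preconditions=["account balance >= 100"],
    actions=["request withdrawal of 100"],
    expected_results=["withdrawal granted", "balance reduced by 100"],
)
```

Keeping test cases as structured data like this makes them easy to store in a repository and to feed into test-running tools.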
1.4-Fundamental Test Process (6)
Test Execution
• Manual
• Automated
• Test sequence
• Test environment
• Test data
23
1.4-Fundamental Test Process (7)
Test Recording & Evaluation
• Recording actual outcomes and comparison against expected outcomes
• Off-line test result evaluation
• Test log
• Test report
• Recording test coverage
• Incident management
24
1.4-Fundamental Test Process (8)
Test Completion
• Test completion criteria must be specified in advance
• Decision strategy for the release/delivery decision must be specified in
advance
• Test manager is responsible for the estimation and presentation of the
product quality, not for release/delivery decision
1. Run TC
2. Passed TC
3. Failed TC
4. Executed TC
5. Failure intensity
6. Number of incident reports
7. Estimation of product quality
8. Reliability of this estimation
9. Projected estimation of product quality
25
1.4-Fundamental Test Process (9)
Completion Criteria
• All test cases executed
• All test cases passed
• No unresolved incident reports
• No unresolved serious incident reports
• Number of faults found
• Pre-defined coverage achieved
– Code coverage
– Functional coverage
– Requirements coverage
– If not, design more test cases
• Required reliability (MTBF) achieved
• Estimated number of remaining faults low enough
26
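The criteria above can be combined into a release gate that is agreed before testing starts. A hedged sketch: the function name, parameters and thresholds below are illustrative choices, not from any standard.

```python
# Evaluate a subset of the slide's completion criteria.
def completion_met(executed, passed, total_cases,
                   open_serious_incidents, coverage, coverage_target):
    return (executed == total_cases            # all test cases executed
            and passed == executed             # all executed cases passed
            and open_serious_incidents == 0    # no unresolved serious reports
            and coverage >= coverage_target)   # pre-defined coverage achieved

print(completion_met(120, 120, 120, 0, 0.92, 0.90))  # True
print(completion_met(120, 118, 120, 2, 0.92, 0.90))  # False
```

In practice the criteria (and their weights) are a business decision fixed in the test plan, not hard-coded logic.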
1.5-Re-Testing and Regression Testing
Definitions
• Re-testing: re-running of test cases that caused failures during previous
executions, after the (supposed) cause of failure (i.e. fault) has been fixed,
to ensure that it really has been removed successfully
• Regression testing: re-running of test cases that did NOT cause failures
during previous execution(s), to ensure that they still do not fail for a new
system version or configuration
• Debugging: the process of identifying the cause of failure; finding and
removing the fault that caused failure
27
1.5-Re-Testing and Regression Testing
(2)
Re-testing
• Re-running of the test case that caused failure previously
• Triggered by delivery and incident report status
• Running of a new test case if the fault was previously exposed by chance
• Testing for similar or related faults
28
1.5-Re-Testing and Regression Testing
(3)
Regression Testing Areas
• Regression due to fixing the fault (side effects)
• Regression due to added new functionality
• Regression due to new platform
• Regression due to new configuration or after the customization
• Regression and delivery planning
29
1.5-Re-Testing and Regression Testing
(4)
Regression Schemes
• Less frequent deliveries
• Round-robin scheme
• Additional selection of test cases
• Statistical selection of test cases
• Parallel testing
• “Smoke-test” for emergency fixes
• Optimisation of regression suite:
– General (understanding system, test case objectives, test coverage)
– History (decreased regression for stable functionality)
– Dependencies (related functionality)
– Test-level co-ordination (avoiding redundant regression on many levels)
30
1.5-Re-Testing and Regression Testing
(5)
Regression and Automation
• Regression test suites under CM control
• Incident tracking for test cases
• Automation pays best in regression
• Regression-driven test automation
• Incremental development
31
1.6-Expected Results
Why Necessary?
• Test = measuring quality = comparing actual outcome with expected
outcome
• What about performance measurement?
• Results = outcomes; outcomes ≠ outputs
• Test case definition: preconditions – inputs – expected outcomes
• Results are part of testware – CM control
32
1.6-Expected Results (2)
Types of Outcomes
• Outputs
• State transitions
• Data changes
• Simple and compound results
• “Long-time” results
• Quality attributes (time, size, etc.)
• Non-testable?
• Side-effects
33
1.6-Expected Results (3)
Sources of Outcomes
Finding out or calculating the correct expected outcomes/results is often more difficult than expected. It is a major task in preparing test cases.
• Requirements
• Oracle
• Specifications
• Existing systems
• Other similar systems
• Standards
• NOT code
34
1.6-Expected Results (4)
Difficult Comparisons
• GUI
• Complex outcomes
• Absence of side-effects
• Timing aspects
• Unusual outputs (multimedia)
• Real-time and long-time difficulties
• Complex calculations
• Intelligent and “fuzzy” comparisons
35
1.7-Prioritization of Tests
Why Prioritize Test Cases?
• Decide importance and order (in time)
• “There is never enough time”
• Testing comes last and suffers from all other delays
• Prioritizing is hard to do right (multiple criteria with different weights)
36
1.7-Prioritization of Tests (2)
Prioritization Criteria
• Severity (failure)
• Priority (urgency)
• Probability
• Visibility
• Requirement priorities
• Feedback to/from development
• Difficulty (test)
• What the customer wants
• Change proneness
• Error proneness
• Business criticality
• Complexity (test object)
• Difficult to correct
37
1.7-Prioritization of Tests (3)
Prioritization Methods
• Random (the order specs happen)
• Experts’ “gut feeling”
• Based on history with similar projects, products or customers
• Statistical Usage Testing
• Availability of: deliveries, tools, environment, domain experts…
• Traditional Risk Analysis
• Multidimensional Risk Analysis
38
2-Testing through the Lifecycle
• Models for testing
• Economics of testing
• Test planning
• Component testing
• Component integration testing
• System testing (functional)
• System testing (non-functional)
• System integration testing
• Acceptance testing
• Maintenance testing
39
2.1-Models for Testing
Verification, Validation and Testing
• Verification: The process of evaluating a system or component to determine whether the products of the given development phase satisfy the conditions imposed at the start of that phase – building the system right
• Validation: The determination of the correctness of the products of
software development with respect to the user needs and requirements –
building the right system
• Testing: The process of exercising software to verify that it satisfies specified requirements and to detect errors
40
2.1-Models for Testing (2)
Waterfall Model
Requirements Analysis → Functional Specifications → Design Specifications → Coding → Testing → Maintenance
41
2.1-Models for Testing (3)
Issues of the traditional (waterfall) model
1. Requirement descriptions incomplete
2. Analysis paralysis
3. Loss of information
4. Virtual reality
7. Workarounds
8. Testing squeezed
42
2.1-Models for Testing (4)
[Diagram: comparison of Waterfall, Iterative (RUP, SCRUM) and Extreme Programming (XP) over time. The analysis, design, coding and testing phases that run once in sequence in waterfall are repeated in each iteration, and in XP interleaved continuously with coding and component testing.]
45
2.2-Economics of Testing
Testing Creates Value or Not?
• Reduction of business risk
• Ensuring time to market
• Control over the costs
46
2.2-Economics of Testing
ROI of Testing
• Investment to testing (resources, tools, testing environments)
• Defect related costs
– Development (lost data, RCA, fixing, building, distribution)
– Business (down-time, loss of customers, lost profit, damage to
corporate identity)
• Cost savings (investment to testing decreases defect related
costs)
ROI = Cost savings – Investment to testing
47
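The slide's formula ROI = cost savings – investment can be made concrete with a small worked example; all the figures below are invented for illustration.

```python
# Worked example of: ROI = cost savings - investment to testing
investment = 50_000            # testers, tools, test environments
defect_costs_without = 200_000 # expected defect-related costs, no testing effort
defect_costs_with = 80_000     # expected defect-related costs with testing

cost_savings = defect_costs_without - defect_costs_with  # 120_000
roi = cost_savings - investment
print(roi)  # 70000
```

A positive result means the testing effort pays for itself through avoided defect-related costs.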
2.2-Economics of Testing
Time-to-market
[Graph: cost and revenue curves over time, with support cost continuing after release; time-to-market marks market entry, time-to-profit the break-even point.]
48
2.2-Economics of Testing (2)
Time-to-profit
[Graph: the same cost and revenue curves as the previous slide, with the break-even (time-to-profit) point highlighted.]
49
2.2-Economics of Testing (3)
Early Test Design
[Graph: the 0.1x–100x fault-cost curve again; designing tests early finds faults at the cheap end of the curve.]
51
2.3-Test Planning (2)
Test Planning Levels
[Diagram: a company-level Test Strategy sits above the Component Test Plan, System Test Plan and Acceptance Test Plan.]
2.3-Test Planning (3)
Test Plan Contents (ANSI/IEEE 829-1998)
1. Test plan identifier
2. Introduction
3. Test items
4. Features to be tested
5. Features not to be tested
6. Approach (strategy)
7. Item pass/fail criteria
8. Suspension criteria and resumption requirements
53
2.3-Test Planning (4)
9. Test deliverables
10. Testing tasks
11. Environmental needs
12. Responsibilities
13. Staffing and training needs
14. Schedule
15. Risks and contingencies
16. Approvals
54
2.4-Component Testing
Component Testing
• First possibility to execute anything
• Different names (Unit, Module, Basic, … Testing)
• Usually done by programmer
• Should have the highest level of detail of the testing activities
55
2.4-Component Testing (2)
Component Test Process (BS 7925-2)
[Diagram: plan, specify, execute and record the component tests, then check completion; if the completion criteria are not met, fix the component test plan and repeat, otherwise END.]
56
2.4-Component Testing (3)
Component Testing CheckList
• Algorithms and logic
• Data structures (global and local)
• Interfaces
• Independent paths
• Boundary conditions
• Error handling
57
2.5-Component Integration Testing
Component Integration Testing
• Prerequisite
– More than one (tested and accepted) component/subsystem
• Steps
– (Assemble components/subsystems)
– Test the assembled result focusing on the interfaces between the
tested components/subsystems
• Goal
– A tested subsystem/system
58
2.5-Component Integration Testing
(2)
Strategies
• Big-bang
• Top-down
• Bottom-up
• Thread integration
• Minimum capability integration
[Diagram: component hierarchy with A at the top, B and C beneath it, D–G on the third level and H–J at the bottom.]
59
2.5-Component Integration Testing
(3)
Stubs and Drivers
[Diagram: testing A top-down uses a stub in place of the not-yet-integrated D–E subtree; testing B with D and E bottom-up uses a driver in place of the calling component A.]
60
2.5-Component Integration Testing
(4)
• Big-Bang Integration
+ No need for stubs or drivers
- Hard to locate faults, re-testing after fixes more extensive
• Top-Down Integration
+ Add to tested baseline (easier fault location), early display of “working”
GUI
- Need for stubs, may look more finished than it is, often crucial details left until last
• Bottom-Up Integration
+ Good when integrating on HW and network, visibility of details
- No working system until last part added, need for drivers and stubs
61
2.5-Component Integration Testing
(5)
• Thread Integration
+ Critical processing first, early warning of performance problems
- Need for complex drivers and stubs
• Minimum Capability Integration
+ Real working (partial) system early, basic parts most tested
- Need for stubs, basic system may be big
62
2.5-Component Integration Testing
(6)
Integration Guidelines
• Integration plan determines build order
• Minimize support software
• Integrate only a small number of components at a time
• Integrate each component only once
63
2.6-System Testing (functional)
Functional System Testing
• Test of functional requirements
• The first opportunity to test the system as a whole
• Test of E2E functionality
• Two different perspectives:
– Requirements based testing
– Business process based testing
64
2.6-System Testing (functional) (2)
Requirements Based Testing
• Test cases are derived from specifications
• Most requirements require more than one test case
Business Process Based Testing
• Test cases are based on expected user profiles
• Business scenarios
• Use cases
65
2.7-System Testing (non-functional)
What is Non-Functional System Testing?
• Testing non-functional requirements
• Testing the characteristics of the system
• Types of non-functional tests:
– Load, performance, stress
– Security, usability
– Storage, volume,
– Installability, platform, recovery, etc.
– Documentation (inspections and reviews)
66
2.7-System Testing (non-functional)
(2)
Load Testing
• ”Testing conducted to evaluate the compliance of a system or
component with specified work load requirements” – BS
7925-1
• The purpose of the test, i.e. can the system:
– handle expected workload?
– handle expected workload over time?
– perform its tasks while handling expected workload?
• Load testing requires tools
• Load testing is used e.g. to find memory leaks or pointer
errors
67
2.7-System Testing (non-functional)
(3)
Load Testing
[Graph: load and response time over time t; response time climbs sharply at the break point.]
68
2.7-System Testing (non-functional)
(4)
Performance/Scalability Testing
”Testing conducted to evaluate the compliance of a system or
component with specified performance requirements” – BS
7925-1
The purpose of the test, i.e. can the system:
– handle required throughput without long delays?
• Performance testing requires tools
• Performance testing is very much about team work together
with database, system, network and test administrators
69
2.7-System Testing (non-functional)
(5)
Performance Testing
[Graph: throughput and response time over time t; response time climbs sharply at the break point.]
70
2.7-System Testing (non-functional)
(6)
Stress/Robustness Testing
”Testing conducted to evaluate the compliance of a system or
component at or beyond the limits of its specified
requirements”
• The purpose of the test, i.e.:
– Can the system handle expected maximum (or higher) workload?
– If our expectations are wrong, can we be sure that the system
survives?
• Should be tested as early as possible
• Stress testing requires tools
71
2.7-System Testing (non-functional)
(7)
Stress Testing
[Graph: load driven up to and beyond the break point; response time over time t shows where the system breaks.]
72
2.7-System Testing (non-functional)
(8)
Performance process planning
• Gather the information about how the software is being used
– Amount of users with different profiles
– Business process
– Typical user tasks (scripts)
– Virtual users’ dynamic schedule (scenarios)
• Run performance tests and monitor different parts of the
information system (LAN, servers, clients, databases, etc.)
• Get feedback to development with some suggestions
• Tune the system until the required performance is achieved
73
2.7-System Testing (non-functional)
(9)
Security Testing
“Testing whether the system meets its specified security
objectives” - BS 7925-1
• The purpose of security tests is to obtain confidence enough
that the system is secure
• Security tests can be very expensive
• Important to think about security in early project phases
• Passwords, encryption, firewalls, deleted files/information,
levels of access, etc.
74
2.7-System Testing (non-functional)
(10)
Usability Testing
“Testing the ease with which users can learn and use a product”
– BS 7925-1
• How to specify usability?
• Is it possible to test usability?
• Preferable done by end users
• Probably their first contact with the system
• The first impression must be good!
75
2.7-System Testing (non-functional)
(11)
Storage and Volume Testing
Storage Testing: “Testing whether the system meets its specified
storage objectives” – BS 7925-1
Volume Testing: “Testing where the system is subjected to large
volumes of data” – BS 7925-1
• Both test types require test tools
76
2.7-System Testing (non-functional)
(12)
Installability Testing
“Testing concerned with the installation procedures for the
system” – BS 7925-1
• Is it possible to install the system according to the installation procedure?
• Are the procedures reasonable, clear and easy to follow?
• Is it possible to upgrade the system?
• Is it possible to uninstall the system?
77
2.7-System Testing (non-functional)
(13)
Platform (Portability) Testing
“Tests designed to determine if the software works properly on
all supported platforms/operating systems”
• Are Web applications checked by all known and used
browsers (Internet Explorer, Netscape, Mozilla, …)?
• Are applications checked on all supported versions of
operating systems (MS Windows 9x, NT, 2000, XP, Vista)?
• Are systems checked on all supported hardware platforms
(classes)?
78
2.7-System Testing (non-functional)
(14)
Recovery Testing
“Testing aimed at verifying the system’s ability to recover from
varying degrees of failure” – BS 7925-1
• Can the system recover from a software/hardware failure or
faulty data?
• How can we inject realistic faults in the system?
• Back-up functionality must be tested!
79
2.8-System Integration Testing
Purpose
• Integrate and test the complete system in its working environment
– Communication middleware
– Other internal systems
– External systems
– Intranet, internet
– 3rd party packages
– Etc.
80
2.8-System Integration Testing (2)
Strategy
• One thing at a time
– Integrate with one other system at a time
– Integrate interfaces one at a time
• Integration sequence important
– Do most crucial systems first
– But mind external dependencies
81
2.9-Acceptance Testing
What is Acceptance Testing?
“Formal testing conducted to enable a user, customer or other
authorized entity to determine whether to accept a system or
component” – BS 7925-1
• User Acceptance Testing
• Contract Acceptance Testing
• Alpha and Beta Testing
82
3-Static Testing Techniques
• Reviews and the test process
• Types of reviews
• Static analysis
83
3.1-Reviews and the Test Process
Why perform Reviews?
• Cost effective
• Problem prevention
• Involve members early
• Effective learning
• Comparison: Test execution vs. Rigorous review:
  Test execution (per fault): incident report 0.5 h; debugging, fixing 1 h; rebuilding, delivery 0.5 h; re-testing and regression 1 h; reporting 0.5 h – total 3.5 h per fault.
  Rigorous review: 4 persons; reviewing 7 h; meetings 3 h – total 40 h; faults found: 11.
84
3.1-Reviews and the Test Process (2)
What to Review?
• User Requirements
• System Specification
• Design
• Code
• User’s Guide
• Test Plan
• Test Specification (Test Cases)
85
3.1-Reviews and the Test Process (3)
Reviews in the Test Process
[Diagram: V-model with reviews (1, 2, 3) on every document. User Requirements are reviewed and verified by Acceptance Testing, System Requirements by System Testing, and the Design by Component Testing.]
86
3.2-Types of Review
Review Techniques:
• Informal review
• Walkthrough
• Technical (peer) review
• Inspection

Goals and Purposes:
• Verification
• Validation
• Consensus
• Improvements
• Fault finding
87
3.2-Types of Review (2)
How to Review?
• Split big documents into smaller parts
• Samples
• Let different types of project members review
• Select a suitable review type
88
3.2-Types of Review (3)
Informal Review
• Undocumented
• Fast
• Few defined procedures
• Useful to check that the author is on track
89
3.2-Types of Review (4)
Formal Review
• Planning
• Documented
• Thorough
• Focused on a certain purpose
90
3.2-Types of Review (5)
Walkthrough
• The author explains his/her thoughts when going through the
document
• The author well prepared
• Dry runs
• Listener (from peer group) unprepared
• Consensus and selling it
91
3.2-Types of Review (6)
Technical (Peer) Review
• Find technical faults / solution
• Without management participation
• Found faults and proposed solutions get documented
• Technical experts
92
3.2-Types of Review (7)
Inspection
• Defined adaptable formal process
– Checklists
– Rules
– Metrics
– Entry/Exit Criteria
• Defined Roles
• Defined Deliverables
• Fagan and Gilb/Graham
93
3.2-Types of Review (8)
Inspection Process
• Planning
• Overview (kick-off) meeting
• Preparation (individual)
• Review meeting
• Editing
• Follow-up
• Metrics
• Process improvements
94
3.2-Types of Review (9)
Inspection - Roles and Responsibilities
• Moderator
• Author
• Reviewer / inspector
• Manager
• Review responsible / review manager / inspection leader
95
3.2-Types of Review (10)
Inspection - Pitfalls
• Lack of training
• Lack of documentation
• Lack of management support
96
3.3-Static Analysis
Static Analysis
“Analysis of a program carried out without executing the
program” – BS 7925-1
• Unreachable code
• Parameter type mismatches
• Possible array bound violations
• Faults found by compilers
• Program complexity
[Graph: fault density rises with program complexity.]
97
3.3-Static Analysis (2)
Static Analysis (2)
• % of the source code changed
• Graphical representation of code properties:
– Control flow graph
– Call tree
– Sequence diagrams
– Class diagrams
98
3.3-Static Analysis (3)
Data Flow Analysis
• Considers the use of data (works better on sequential code)
• Examples:
– Definitions with no intervening use
– Attempted use of a variable after it is killed
– Attempted use of a variable before it is defined
99
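The anomalies listed above can be shown in a tiny example. The function below is invented for illustration; it still runs, but a data-flow analyser would flag the two marked anomalies.

```python
# Hypothetical function containing data-flow anomalies.
def report_total(items):
    total = 0           # definition ...
    total = sum(items)  # ... redefined with no intervening use (anomaly 1)
    label = "unused"    # defined but never used afterwards (anomaly 2)
    return total

print(report_total([1, 2, 3]))  # 6
```

Neither anomaly is a failure by itself, but each often signals a real fault (e.g. the wrong variable was assigned or used).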
3.3-Static Analysis (4)
Static Metrics
• McCabe’s Cyclomatic complexity measure
• Lines of Code (LoC)
• Fan-out and Fan-in
• Nesting levels
100
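McCabe's cyclomatic complexity can be computed from a control flow graph as V(G) = E − N + 2. A minimal sketch: the node and edge sets below model a two-decision function (like the adjust_temperature example later in these notes) and are an assumption for illustration.

```python
# V(G) = number of edges - number of nodes + 2
def cyclomatic_complexity(edges, nodes):
    return len(edges) - len(nodes) + 2

# Graph of a function with two sequential decisions:
nodes = {"read", "dec1", "heat", "dec2", "cool", "end"}
edges = {("read", "dec1"), ("dec1", "heat"), ("dec1", "dec2"),
         ("dec2", "cool"), ("dec2", "end"), ("heat", "end"), ("cool", "end")}

print(cyclomatic_complexity(edges, nodes))  # 7 - 6 + 2 = 3
```

The result 3 matches the rule of thumb "number of decisions + 1", and is also the number of independent paths to cover in testing.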
4-Dynamic Testing Techniques
• Black and White box testing
• Black box test techniques
• White box text techniques
• Test data
• Error-Guessing
101
4.1-Black- and White-box Testing
• Strategy
– What’s the purpose of testing?
– What’s the goal of testing?
– How to reach the goal?
• Test Case Selection Methods
– Which test cases are to be executed?
– Are they good representatives of all possible test cases?
• Coverage Criteria
– How much of code (requirements, functionality) is covered?
102
4.1-Black- and White-box Testing
(2)
Test Case Selection Methods
• White-box / Structural / Logic driven
– Based on the implementation (structure of the code)
• Black-box / Functional / Data driven
– Based on the requirements (functional specifications, interfaces)
103
4.1-Black- and White-box Testing
(3)
Importance of Test Methods
[Diagram: the relative weight of black-box versus white-box techniques.]
104
4.1-Black- and White-box Testing
(4)
Measuring Code Coverage
• How much of the code has been executed?
• Code Coverage Metrics:
– Segment coverage
– Call-pair coverage
• Tool Support:
– Often a good help
– For white-box tests almost a requirement
105
4.2-Black-box Test Techniques
Requirements Based Testing
• How much of the product's features are covered by test cases (TC)?
• Requirement Coverage Metrics:
Requirement Coverage = Tested requirements / Total number of requirements
106
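The coverage formula above is straightforward to compute from a traceability matrix. A minimal sketch; the requirement IDs are invented.

```python
# Requirement coverage = tested requirements / total requirements
def requirement_coverage(tested, all_requirements):
    covered = set(tested) & set(all_requirements)
    return len(covered) / len(all_requirements)

reqs = ["R1", "R2", "R3", "R4"]
print(requirement_coverage(["R1", "R3"], reqs))  # 0.5
```

In practice the "tested" set comes from tracing each test case back to the requirement(s) it exercises.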
4.2-Black-box Test Techniques (2)
Creating Models
• Making models in general
– Used to organize information
– Often reveals problems while making the model
• Model based testing
– Test cases extracted from the model
– Examples
• Syntax testing, State transition testing, Use case based testing
– Coverage based on model used
107
4.2-Black-box Test Techniques (3)
Black-box Methods
• Equivalence Partitioning
• Boundary Value Analysis
• State Transition Testing
• Cause-Effect Graphing
• Syntax Testing
• Random Testing
108
4.2.1-Equivalence Partitioning
Equivalence Partitioning
• Identify sets of inputs under the assumption that all values in
a set are treated exactly the same by the system under test
• Make one test case for each identified set (equivalence class)
• Most fundamental test case technique
109
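The technique can be sketched in code using the withdrawal example that follows: one representative value per equivalence class. The valid range 10–200 and the classifier below are assumptions matching the example slide.

```python
# Classify a withdrawal amount; every value inside one partition
# is assumed to be treated identically by the system under test.
def classify(amount):
    if amount < 0:
        return "refused"   # invalid partition: negative amount
    if amount < 10:
        return "refused"   # invalid partition: below minimum
    if amount > 200:
        return "refused"   # invalid partition: above maximum
    return "granted"       # valid partition: 10..200

# One test case (representative) per equivalence class:
representatives = {-10: "refused", 5: "refused", 100: "granted", 250: "refused"}
assert all(classify(a) == r for a, r in representatives.items())
```

Four test cases thus stand in for the whole input space, which is the economy the technique buys.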
4.2.1-Equivalence Partitioning (2)
Equivalence Partitioning (Example)
Amount to be withdrawn (enough money in the account):
invalid (negative, e.g. -10) | invalid (0–9) | valid (10–200) | invalid (201 and above)
Test cases: one per class – withdrawal refused for each invalid class (1, 3, 4) and withdrawal granted for the valid class (2).
4.2.2-Boundary Value Analysis
Boundary Value Analysis
• For each identified boundary in input and output, create two
test cases. One test case on each side of the boundary but
both as close as possible to the actual boundary line.
111
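The rule above (two values per boundary, one on each side, as close as possible) is easy to automate. A sketch assuming integer inputs and the valid range 10–200 from the earlier withdrawal example.

```python
# For a valid range [low, high], return the closest value on each
# side of both boundaries: just outside, and on the boundary itself.
def boundary_values(low, high):
    return [low - 1, low, high, high + 1]

print(boundary_values(10, 200))  # [9, 10, 200, 201]
```

Boundary tests complement equivalence partitioning: off-by-one mistakes cluster exactly at these values.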
4.2.2-Boundary Value Analysis (2)
Boundary Value Analysis (Example)
Temperature: invalid | valid | invalid, with boundaries between 2–3 and 8–9.
Input → Expected Output:
+20.000 → red light
+8.0001 → red light
+8.0000 → green light
112
4.2.2-Boundary Value Analysis (3)
Comparison
• Error detection on common mistakes
4.2.2-Boundary Value Analysis (4)
Test Objectives?
[Table template with columns: Conditions | Valid Partition | Tag | Invalid Partition | Tag | Valid Boundary | Tag | Invalid Boundary | Tag]
115
4.2.3-State Transition Testing (2)
State Transition Testing (Example)
[State diagram: states Lamps Off, Green On and White On; the Blue, Green and Red keys trigger transitions between the states, and Reset returns the lamps to Off.]
116
4.3-White-Box Test Techniques
White-box Testing
• Test case input always derived from implementation
information (code)
• Most common implementation info:
– Statements
– Decision Points in the Code
– Usage of variables
• Expected output in test case always from requirements!
117
4.3-White-Box Test Techniques (2)
White-box Test Methods
• Statement Testing
• Branch/Decision Testing
• Data Flow Testing
• Branch Condition Testing
• Branch Condition Combination Testing
• Modified Condition Testing
• LCSAJ Testing
118
4.3-White-Box Test Techniques (3)
Control Flow Graphs
global enum control – {heating, cooling};

1  void adjust_temperature()
2  BEGIN
3    Float temp;
4    Read(temp);
5    IF (temp =< 15)
6      air := heating;
7    ELSE
8      IF (temp >= 25)
9        air := cooling;
10     ENDIF
11   ENDIF
12 END

[Flow graph: Read(temp); decision temp =< 15 – YES: air := heating; NO: decision temp >= 25 – YES: air := cooling; NO: fall through to END.]
119
4.3.1-Statement Testing
Statement Testing
• Execute every statement in the code at least once during test
case execution
• Requires the use of tools
– Instrumentation skews performance
• Coverage
Statement Coverage = Executed statements / Total number of statements
120
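The idea can be sketched in Python: a port of the adjust_temperature pseudocode (an assumption) plus a home-made line tracer standing in for a real coverage tool.

```python
import sys

# Assumed Python port of the slides' adjust_temperature example.
def adjust_temperature(temp):
    air = None
    if temp <= 15:
        air = "heating"
    elif temp >= 25:
        air = "cooling"
    return air

# Collect the line numbers executed inside func during one call.
def trace_lines(func, *args):
    executed = set()
    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is func.__code__:
            executed.add(frame.f_lineno)
        return tracer
    sys.settrace(tracer)
    try:
        func(*args)
    finally:
        sys.settrace(None)
    return executed

# One input executes only some statements ...
cov_one = trace_lines(adjust_temperature, 10)
# ... three inputs (one per path) execute every statement.
cov_all = set().union(*(trace_lines(adjust_temperature, t) for t in (10, 30, 20)))
print(len(cov_one) < len(cov_all))  # True
```

Real tools instrument the code the same way in principle, which is why instrumentation skews performance measurements.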
4.3.1-Statement Testing (2)
Statement Coverage
[Same adjust_temperature listing and flow graph as on the Control Flow Graphs slide; statement coverage counts which of its statements the tests execute.]
121
4.3.2-Branch/Decision Testing
Branch/Decision Testing
• Create test cases so that each decision in the code executes
with both true and false outcomes
– Equivalent to executing all branches
• Requires the use of tools
– Instrumentation skews performance
• Coverage
122
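Decision coverage can be illustrated with the same example: each decision must evaluate both true and false. The Python port below is an assumption; three inputs drive every decision both ways.

```python
# Assumed Python port of the slides' adjust_temperature example.
def adjust_temperature(temp):
    if temp <= 15:
        return "heating"
    if temp >= 25:
        return "cooling"
    return None

# Three test cases cover both outcomes of both decisions:
assert adjust_temperature(10) == "heating"  # first decision True
assert adjust_temperature(30) == "cooling"  # first False, second True
assert adjust_temperature(20) is None       # both decisions False
```

Note that two tests (10 and 30) already achieve 100% statement coverage here, but the third (20) is needed for full decision coverage: decision coverage subsumes statement coverage, not the other way round.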
4.3.2-Branch/Decision Testing (2)
Decision Coverage
[Same adjust_temperature listing and flow graph; decision coverage requires each IF to take both its YES and NO branch.]
123
4.4-Test Data
Test Data Preparation
• Professional data generators
• Modified production data
• Data administration tools
• Own data generators
• Excel
• Test automation tools (test-running tools, test data
preparation tools, performance test tools, load generators,
etc.)
124
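An "own data generator" from the list above can be very small. A sketch; the record shape and function name are invented for illustration.

```python
import random

# Generate n account records; a fixed seed makes runs repeatable,
# which matters when tests must be re-run for regression.
def generate_accounts(n, seed=0):
    rng = random.Random(seed)
    return [{"id": i, "balance": rng.randint(0, 10_000)} for i in range(n)]

data = generate_accounts(3)
print(len(data))  # 3
```

Deterministic generation also doubles as documentation of the test data, one of the recommendations later in this section.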
4.4-Test Data (2)
What Influences Test Data Preparation?
• Complexity and functionality of the application
• Used development tool
• Repetition of testing
• The amount of data needed
125
4.4-Test Data (3)
Recommendation for Test Data
• Don’t remove old test data
• Describe test data
• Check test data
• Expected results
126
4.5-Error Guessing
Error Guessing
• Not a structured testing technique
• Idea
– Poke around in the system using your gut feeling and previous
experience trying to find as many faults as possible
• Tips and Tricks
– Zero and its representations
– “White space”
– Matching delimiters
– Talk to people
127
5-Test Management
• Organizational structures for testing
• Configuration management
• Test Estimation, Monitoring and Control
• Incident Management
• Standards for Testing
128
5.1-Organization Structures for Testing
• Developers test their own code
• Development team responsibility (buddy testing)
• Tester(s) in the development team
• A dedicated team of testers
• Internal test consultants providing advice to projects
• A separate test organization
129
5.1-Organization Structures for Testing
(2)
Independence
• Who does the Testing?
– Component testing – developers
– System testing – independent test team
– Acceptance testing – users
• Independence is more important during test design than
during test execution
• The use of structured test techniques increases the
independence
• A good test strategy should mix the use of developers and
independent testers
130
5.1-Organization Structures for Testing
(3)
Specialist Skills Needed in Testing
• Test managers (test management, reporting, risk analysis)
• Test analyst (test case design)
• Test automation experts (programming test scripts)
• Test performance experts (creating test models)
• Database administrator or designer (preparing test data)
• User interface experts (usability testing)
• Test environment manager
• Test methodology experts
• Test tool experts
• Domain experts
131
5.2-Configuration Management
What does Configuration Management include?
• Identification of configuration items
• Configuration control:
– Hardware
– Software
– Documents
– Methods
– Tools
• Configuration Status Accounting
• Configuration Audit
132
5.2-Configuration Management (2)
Configuration Identification
• Identification of configuration items (CI)
• Labelling of CI’s
– Labels must be unique
– A label usually consists of two parts:
• Name, including title and number
• Version
• Naming and versioning conventions
• Identification of baselines
133
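The two-part label described above can be sketched as a small record type (class, field and convention names are assumptions for illustration):

```python
from dataclasses import dataclass

@dataclass(frozen=True)          # frozen: a label must not change once assigned
class ConfigurationItem:
    title: str                   # name part: title...
    number: int                  # ...and number
    version: str                 # version part

    @property
    def label(self):
        # one possible naming/versioning convention: <title>-<number> v<version>
        return f"{self.title}-{self.number:03d} v{self.version}"

ci = ConfigurationItem("TestPlan", 7, "1.2")
# ci.label == "TestPlan-007 v1.2"
```

Uniqueness follows from the convention: no two CIs may share both name and version.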
5.2-Configuration Management (3)
Configuration Control: Change Control Procedure
[Flowchart: change request → Analysis → Decision → Approve (→ Impl.) or Reject]
134
5.2-Configuration Management (4)
Configuration Status Accounting
• Recording and reporting information describing configuration
items and their status
• Information strategy
– What?
– To whom?
– When?
– How?
135
5.2-Configuration Management (5)
Configuration Audit
• Auditing of product configuration
– Maturity
– Completeness
– Compliance with requirements
– Integrity
– Accuracy
136
5.2-Configuration Management (6)
CM and Testing
What should be configuration managed?
• All test documentation and testware
• Documents that the test documentation is based on
• Test environment
• The product to be tested
Why?
• Traceability
137
5.3-Test Estimation, Monitoring
and Control
Test Estimation
• Why does this happen time after time?
• Are we really that lousy at estimating
testing?
• What can we do to avoid situations
like this?
138
5.3-Test Estimation, Monitoring
and Control (2)
Rules of Thumb
• Lines of the source code
• Windows (Web-pages) of the application
• Degree of modifications
139
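These rules of thumb can be combined into a crude first-cut estimator. The factors below are illustrative assumptions, not published figures; any real estimate must be calibrated against the organization's own history:

```python
def estimate_test_hours(kloc, screens, modified_fraction,
                        hours_per_kloc=4.0, hours_per_screen=2.0):
    """Rough test-effort estimate in person-hours from size and change degree."""
    base = kloc * hours_per_kloc + screens * hours_per_screen
    # the degree of modification scales the effort; even a small change
    # needs a minimum regression pass, hence the floor of 10%
    return base * max(modified_fraction, 0.1)

# Example: 50 KLOC, 20 screens/web pages, 30% of the system modified
hours = estimate_test_hours(50, 20, 0.3)   # (200 + 40) * 0.3 = 72.0
```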
5.3-Test Estimation, Monitoring
and Control (3)
Test Estimation Activities
• Identify test activities
• Estimate time for each activity
• Identify resources and skills needed
• In what order should the activities be performed?
• Identify for each activity
– Start and stop date
– Resource to do the job
140
5.3-Test Estimation, Monitoring
and Control (4)
What Makes Estimation Difficult?
• Testing is not independent
– Quality of software delivered to test?
– Quality of requirements to test?
• Faults will be found!
– How many?
– Severity?
– Time to fix?
• Test environment
• How many iterations?
141
5.3-Test Estimation, Monitoring
and Control (5)
Monitoring Test Execution Progress
• No deviations from plan
• High quality?
– Few faults found
[Graph: number of test cases vs. time — the “planned test cases” and “tests passed” curves meet at the delivery date]
142
5.3-Test Estimation, Monitoring
and Control (6)
Monitoring Test Execution Progress
• Problems possible to observe
– Low quality
• Software
[Graph: number of test cases vs. time — the “tests passed” curve flattens out well below “planned test cases” before the delivery date]
143
5.3-Test Estimation, Monitoring
and Control (7)
Monitoring Test Execution Progress
• Changes made to improve the progress
[Graph: number of test cases vs. time — “planned test cases”, “tests planned”, “tests run” and “tests passed” curves; an action taken moves the old delivery date to a new delivery date]
144
5.3-Test Estimation, Monitoring
and Control (8)
Test Control
What to do when things happen that affect the test plan:
• Re-allocation of resources
• Changes to the test schedule
• Changes to the test environment
• Changes to the entry/exit criteria
• Changes to the number of test iterations
• Changes to the test suite
• Changes of the release date
145
5.4-Incident Management
What is an Incident?
• Any significant, unplanned event that occurs during testing
or any other event that requires subsequent investigation
or correction.
– Differences in actual and expected test results
– Possible causes:
• Software fault
• Expected results incorrect
• Test was not performed correctly
– Can be raised against code and/or documents
146
5.4-Incident Management (2)
Incident Information
• Unique ID of the incident report
• Test case ID and revision
• Software under test ID and revision
• Test environment ID and revision
• Name of tester
• Expected and actual results
• Severity, scope and priority
• Any other relevant information to reproduce or fix the
potential fault
147
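The incident information listed above maps naturally onto a record type. A sketch (field names are assumptions; a real tool would add workflow states):

```python
from dataclasses import dataclass

@dataclass
class IncidentReport:
    incident_id: str        # unique ID of the report
    test_case: str          # test case ID and revision, e.g. "TC-12 r3"
    software_rev: str       # software under test ID and revision
    environment_rev: str    # test environment ID and revision
    tester: str
    expected: str
    actual: str
    severity: str
    priority: str
    notes: str = ""         # anything needed to reproduce or fix the fault

inc = IncidentReport("INC-001", "TC-12 r3", "build 1.4.2", "env 2.0",
                     "A. Tester", "heating", "cooling", "major", "high")
```

Carrying the revision fields is what makes the incident reproducible later, once code and environment have moved on.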
5.4-Incident Management (3)
Incident Tracking: Change Control Procedure
[Flowchart: incident report → Decision → Approve or Reject]
148
5.5-Standards for Testing
Types of Standards for Testing
• QA standards
– States that testing should be performed
• Industry-specific standards
– Specifies the level of testing
• Testing standards
– Specifies how to perform testing
149
6-Test Tools
• Types of CAST tool
• Tool Selection and Implementation
150
6.1-Types of CAST Tools
Types of CAST Tools
• Requirements Tools
• Static Analysis Tools
• Test Design Tools
• Test Data Preparation Tools
• Test-running Tools
• Test Harness & Drivers
• Performance Test Tools
• Dynamic Analysis Tools
• Debugging Tools
• Comparison Tools
• Test Management Tools
• Coverage Tools
151
6.1-Types of CAST Tools (2)
Requirement Tools
• Tools for testing requirements and requirement specifications:
– Completeness
– Consistency of terms
– Requirements model: graphs, animation
• Traceability:
– From requirements to test cases
– From test cases to requirements
– From products to requirements and test cases
• Connecting requirements tools to test management tools
152
6.1-Types of CAST Tools (3)
Static Analysis Tools
• Testing without running the code
• Compilers, more advanced syntax analysis
• Data flow faults
• Measuring complexity and other attributes
• Graphic representation of control flow
• How to use static analysis?
153
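A minimal illustration of "testing without running the code": the sketch below uses Python's standard ast module to approximate cyclomatic complexity by counting decision points (the "decisions + 1" formula is the usual rough approximation):

```python
import ast

def complexity(source):
    """Approximate cyclomatic complexity: decision points + 1."""
    tree = ast.parse(source)
    decisions = sum(isinstance(node, (ast.If, ast.While, ast.For, ast.BoolOp))
                    for node in ast.walk(tree))
    return decisions + 1

code = """
def adjust_temperature(temp):
    if temp <= 15:
        return "heating"
    elif temp >= 25:
        return "cooling"
"""
# two if-decisions -> complexity 3, computed without executing the function
```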
6.1-Types of CAST Tools (4)
Test Design Tools
• Most “demanding” test automation level
• Types of automated test generation:
– Test scripts from formal test specifications
– Test cases from requirements or from design specification
– Test cases from source code analysis
155
6.1-Types of CAST Tools (6)
Test-running Tools
• Tools that feed inputs, then log and compare outcomes
• Examples of test-running tools:
– “Playback”-part of C/R-tools
– For character-based applications
– For GUI (GUI objects, Bitmaps)
• External or built-in comparison (test result evaluation) facility
• Test data hard-coded or in files
• Commonly used for automated regression
• Test procedures in programmable script languages
156
6.1-Types of CAST Tools (7)
Test Harnesses & Drivers
• Test driver: a test-running tool without input generator
• Simulators and emulators are often used as drivers
• Harness: unattended running of many scripts (tools)
– Fetching of scripts and test data
– Actions for failures
– Test management
157
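The three harness responsibilities listed above — fetching scripts, acting on failures, and reporting — fit in a short loop. A sketch (function names are assumptions; real harnesses also handle test data and environment resets):

```python
def run_suite(scripts, on_failure=None):
    """scripts: mapping name -> zero-argument callable that raises on failure.

    Runs every script unattended and returns a name -> status report.
    """
    results = {}
    for name, script in scripts.items():
        try:
            script()
            results[name] = "PASS"
        except Exception as exc:
            results[name] = f"FAIL: {exc}"
            if on_failure:
                on_failure(name, exc)   # e.g. save logs, reset environment
    return results

def failing_script():
    assert 2 + 2 == 5, "arithmetic broke"

results = run_suite({"smoke": lambda: None, "boundary": failing_script})
```

The key property is that one failing script does not stop the run: the harness records the failure and continues.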
6.1-Types of CAST Tools (8)
Performance Test Tools
• Tools for load generation and for tracing and measurement
• Load generation:
– Input generation or driver stimulation of test object
– Level of stimulation
– Environment load
• Logging for debugging and result evaluation
• On-line and off-line performance analysis
• Graphs and load/response time relation
• Used mainly for non-functional testing… but even for “background testing”
158
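Load generation and measurement can be sketched with the standard library alone (the target function and load levels are stand-in assumptions; real performance tools add ramp-up, environment load, and off-line analysis):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def measure(target, n_requests=20, concurrency=5):
    """Fire n_requests concurrent calls at target; return (min, mean, max) times."""
    def one_call(_):
        start = time.perf_counter()
        target()                               # the stimulated test object
        return time.perf_counter() - start
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        times = list(pool.map(one_call, range(n_requests)))
    return min(times), sum(times) / len(times), max(times)

# Stand-in target: roughly 5 ms of simulated work per request
fastest, mean, slowest = measure(lambda: time.sleep(0.005))
```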
6.1-Types of CAST Tools (9)
Dynamic Analysis Tools
• Run-time information on the state of executing software
• Hunting side-effects
• Memory usage (writing to “wrong” memory)
• Memory leaks (allocation and de-allocation of memory)
• Unassigned pointers, pointer arithmetic
159
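The "allocation without de-allocation" kind of leak can be hunted with the standard library's tracemalloc: snapshot before and after an operation and compare what grew. A sketch with a deliberately leaking function:

```python
import tracemalloc

leak = []                       # module-level list: whatever lands here survives

def leaky_operation():
    leak.append(bytearray(100_000))   # ~100 kB that is never released

tracemalloc.start()
before = tracemalloc.take_snapshot()
leaky_operation()
after = tracemalloc.take_snapshot()
# compare_to points at the source lines whose allocations grew
growth = sum(stat.size_diff for stat in after.compare_to(before, "lineno"))
tracemalloc.stop()
# growth is roughly 100 kB, attributed to the leaking line
```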
6.1-Types of CAST Tools (10)
Debugging Tools
• “Used mainly by programmers to reproduce bugs and
investigate the state of programs”
• Used to control program execution
• Mainly for debugging, not testing
• Debugger command scripts can support automated testing
• Debugger as execution simulator
• Different types of debugging tools
160
6.1-Types of CAST Tools (11)
Comparison Tools
• To detect differences between actual and expected results
• Expected results hard-coded or from file
• Expected results can be outputs (GUI, character-based or any
other interface)
• Results can be complex:
– On-line comparison
– Off-line comparison
161
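An off-line comparison of this kind can be built on Python's standard difflib module; the sketch below reports differences between expected and actual outputs in unified-diff form (the output strings are invented examples):

```python
import difflib

def compare(expected, actual):
    """Return unified-diff lines; an empty list means the outputs match."""
    diff = difflib.unified_diff(expected.splitlines(), actual.splitlines(),
                                fromfile="expected", tofile="actual",
                                lineterm="")
    return list(diff)

mismatch = compare("temp=20\nair=off\n", "temp=20\nair=cooling\n")
match = compare("temp=20\n", "temp=20\n")
```

Complex results (GUI output, screenshots) need more than line diffing, but the contract is the same: empty diff means pass.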
6.1-Types of CAST Tools (12)
Test Management Tools
• Very wide category of tools and functions/activities:
– Testware management
– Monitoring, scheduling and test status control
– Test project management and planning
– Incident management
– Result analysis and reporting
162
6.1-Types of CAST Tools (13)
Coverage Tools
• Measure structural test coverage
• Pre-execution: instrumentation
• Dynamic: measurement during execution
• Various coverage measures
• Language-dependent
• Difficulties for embedded systems
• Usage: measuring the quality of tests
• Good coverage: only legs are uncovered
163
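The instrumentation step can be shown by hand: probes placed at each branch arm record what actually ran, and coverage is hits divided by probes. Real coverage tools insert these probes automatically before execution; the version below is a manually instrumented sketch:

```python
hits = set()

def probe(point):
    hits.add(point)             # record that this branch arm executed

def adjust_temperature(temp):   # manually instrumented version
    if temp <= 15:
        probe("outer-true"); return "heating"
    probe("outer-false")
    if temp >= 25:
        probe("inner-true"); return "cooling"
    probe("inner-false")

ALL_POINTS = {"outer-true", "outer-false", "inner-true", "inner-false"}
adjust_temperature(10)          # hits outer-true
adjust_temperature(30)          # hits outer-false, inner-true
coverage = len(hits) / len(ALL_POINTS)   # 3 of 4 branch arms -> 0.75
```

The measurement also shows what is missing: no test with 15 < temp < 25 has run, so "inner-false" stays uncovered.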
6.2-Tool Selection and
Implementation
Which Activities to Automate?
• Identify benefits and prioritise importance
• Automate administrative activities
• Automate repetitive activities
• Automate what is easy to automate (at least at the beginning)
• Automate where necessary (e.g. performance)
• NOT necessarily test execution!
• Probably NOT test design; do not “capture”
164
6.2-Tool Selection and
Implementation (2)
Automated Chaos = Faster Chaos
• Automation requires mature test process
• Successful automation requires:
– Written test specification
– Known expected outputs
– Testware Configuration Management
– Incident Management
– Relatively stable requirements
– Stable organization and test responsibility
165
6.2-Tool Selection and
Implementation (3)
Tool Choice: Criteria
• What you need now
– Detailed list of technical requirements
– Detailed list of non-technical requirements
• Long-term automation strategy of your organization
• Integration with your test process
• Integration with your other (test) tools
• Time, budget, support, training
166
6.2-Tool Selection and
Implementation (4)
Tool Selection: 4 steps
• 4 stages of the selection process according to ISEB:
1. Creation of candidate tool shortlist
2. Arranging demos
3. Evaluation of selected tools
4. Reviewing and selecting tool
167
6.2-Tool Selection and
Implementation (5)
Tool Implementation
• This is development – use your standard development
process!
• Plan resources, management support
• Support and mentoring
• Training
• Pilot project
• Early evaluation
• Publicity of early success
• Test process adjustments
168
6.2-Tool Selection and
Implementation (6)
Pilot and Roll-Out
• Pilot objectives
– Tool experience
– Identification of required test process changes
– Assessment of actual costs and benefits
• Roll-out pre-conditions
– Based on a positive pilot evaluation
– User commitment
– Project and management commitment
169
6.2-Tool Selection and
Implementation (7)
Automation Benefits
• Faster and cheaper
• Better accuracy
• Stable test quality
• Automated logging and reporting
• More complete regression
• Liberation of human resources
• Better coverage
• More negative tests
• “Impossible” tests
• Reliability and quality estimates
170
6.2-Tool Selection and
Implementation (8)
Automation Pitfalls
• More time, money, resources!
• No test process
• Missing requirements
• Missing test specs
• Missing outcomes
• Poor CM
• Poor incident management
• Defensive attitudes
• Too little regression
• Exotic target system
• Incompatible with other tools
• Own tools needed
• Much training
• Mobile test lab
• To save late projects
171
6.2-Tool Selection and
Implementation (9)
Automation on Different Test Levels
• Component: stubs and drivers, coverage analysis
• Integration: interfaces and protocols
• System: automated test-running and performance tools
• Acceptance: requirements, installation, usability
172