Software Testing and Analysis
Quality Assurance
V & V goals
Verification and validation should establish confidence that the software is fit for purpose. This does NOT mean completely free of defects. Rather, it must be good enough for its intended use, and the type of use will determine the degree of confidence that is needed.
Relative cost of fixing a defect: 1x during definition, 1.5x to 6x during development, 60x to 100x post-release.
Sequential model
[Diagram: requirements -> specification -> planning -> maintenance, with testing/verification at each phase; operations mode]
Requirements specification
High-level design
Formal specification
Detailed design
Program
Prototype
Dynamic validation
V & V planning
Careful planning is required to get the most out of testing and inspection processes. Planning should start early in the development process. The plan should identify the balance between static verification and testing. Test planning is about defining standards for the testing process rather than describing product tests.
[Diagram: test phases against development phases — system specification, system design, detailed design; acceptance test; service]
Walkthroughs
Informal examination of a product (document)
Made up of: developers, the client, next-phase developers, and the Software Quality Assurance group leader
Produces: a list of items not understood and a list of items thought to be incorrect
Software inspections
Involve people examining the source representation with the aim of discovering anomalies and defects. Do not require execution of a system, so may be used before implementation. May be applied to any representation of the system (requirements, design, test data, etc.). A very effective technique for discovering errors.
Inspection success
Many different defects may be discovered in a single inspection. In testing, one defect may mask another, so several executions are required. Inspections reuse domain and programming knowledge, so reviewers are likely to have seen the types of error that commonly arise.
Program inspections
A formalised approach to document reviews. Intended explicitly for defect DETECTION (not correction). Defects may be logical errors, anomalies in the code that might indicate an erroneous condition (e.g. an uninitialised variable), or non-compliance with standards.
Inspection pre-conditions
A precise specification must be available. Team members must be familiar with the organisation's standards. Syntactically correct code must be available. An error checklist should be prepared. Management must accept that inspection will increase costs early in the software process. Management must not use inspections for staff appraisal.
Inspection procedure
A system overview is presented to the inspection team. Code and associated documents are distributed to the inspection team in advance. The inspection takes place and discovered errors are noted. Modifications are made to repair discovered errors. Re-inspection may or may not be required.
Inspection teams
Made up of at least 4 members: the author of the code being inspected; an inspector who finds errors, omissions and inconsistencies; a reader who reads the code to the team; and a moderator who chairs the meeting and notes discovered errors. Other roles are scribe and chief moderator.
Inspection checklists
A checklist of common errors should be used to drive the inspection. The error checklist is programming-language dependent: the 'weaker' the type checking, the larger the checklist. Examples: initialisation, constant naming, loop termination, array bounds, etc.
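As an illustration of two checklist items (loop termination and array bounds), here is a small Python sketch, not from the slides, contrasting a buggy routine with its corrected form:

```python
def sum_first_n_buggy(values, n):
    # Array-bounds / loop-termination defect: range(n + 1) runs one
    # iteration too many, reading past the intended n items — the kind
    # of off-by-one error a checklist-driven inspection should catch.
    total = 0
    for i in range(n + 1):
        total += values[i]
    return total

def sum_first_n_fixed(values, n):
    total = 0
    for i in range(n):  # correct bound: exactly n iterations
        total += values[i]
    return total

data = [1, 2, 3, 4, 5]
print(sum_first_n_fixed(data, 3))  # 6
print(sum_first_n_buggy(data, 3))  # 10 — silently includes a 4th element
```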
Inspection rate
500 statements/hour during overview. 125 source statements/hour during individual preparation. 90-125 statements/hour during the inspection meeting itself. Inspection is therefore an expensive process: inspecting 500 lines costs about 40 person-hours of effort (@ $50/hr = $2000!!!).
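The cost figure can be checked with a quick back-of-the-envelope calculation; the 4-person team and the 100 statements/hour meeting rate are assumptions consistent with the rates quoted above:

```python
# Rough check of the ~40 person-hour / $2000 figure (assumed team of 4,
# assumed meeting rate of 100 statements/hour).
lines = 500
team = 4
overview_hrs = lines / 500   # 1 hour, whole team attends
prep_hrs = lines / 125       # 4 hours per person
meeting_hrs = lines / 100    # 5 hours, whole team attends

total = team * (overview_hrs + prep_hrs + meeting_hrs)
cost = total * 50            # at $50/hr
print(total, cost)           # 40.0 person-hours, $2000.0
```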
Program testing
Can reveal the presence of errors, NOT their absence. A successful test is a test which discovers one or more errors. Testing is the only validation technique for non-functional requirements. Should be used in conjunction with static verification to provide full V&V coverage.
Fault: a bug, an incorrect piece of code
Failure: the observable result of a fault
Error: the mistake made by the programmer/developer
[Diagram: specification -> test cases -> test results; debugging: locate error -> repair error -> re-test program]
Testing phases
Component testing
Testing of individual program components. Usually the responsibility of the component developer (except sometimes for critical systems). Tests are derived from the developer's experience.
Integration testing
Testing of groups of components integrated to create a system or sub-system. The responsibility of an independent testing team. Tests are based on a system specification.
Testing priorities
Only exhaustive testing can show a program is free from defects; however, exhaustive testing is impossible. Tests should exercise a system's capabilities rather than its components. Testing old capabilities is more important than testing new capabilities. Testing typical situations is more important than boundary value cases.
[Diagram: testing process — test data -> test results -> test reports]
Methods of testing
Test to specification:
Black box, data-driven, functional testing. The code is ignored: only the specification document is used to develop test cases.
Test to code:
Glass box / white box, logic-driven testing. The specification is ignored: only the code is examined.
Write a program that tests whether any given program is correct; its output is "correct" or "incorrect". Test this program on itself. If the output is "incorrect", then how do you know that output is correct?
Conundrum, Dilemma, or Contradiction?
Black-box testing
An approach to testing where the program is considered as a black box. The program test cases are based on the system specification. Test planning can begin early in the software process.
Black-box testing
[Diagram: input test data Ie, including inputs causing anomalous behaviour -> system -> output test results Oe, including outputs which reveal the presence of defects]
Determine the ranges of the working system. Develop equivalence classes of test cases. Examine the boundaries of these classes carefully.
Equivalence partitioning
Input data and output results often fall into different classes where all members of a class are related. Each of these classes is an equivalence partition where the program behaves in an equivalent way for each class member. Test cases should be chosen from each partition.
Equivalence partitioning
[Diagram: invalid and valid input partitions mapped through the system to output partitions]
Example partitions for an input that must lie between 4 and 10:
Less than 4
Between 4 and 10
More than 10
Test values 3, 4, 7, 11 and 10 exercise each partition and its boundaries.
Sorting example
Example: sort (lst, n)
Sort a list of numbers The list is between 2 and 1000 elements
Domains:
The list has some item type (of little concern) n is an integer value (sub-range)
Equivalence classes:
n < 2
n > 1000
2 <= n <= 1000
Sorting example
What do you test? Not all cases of integers. Not all cases of positive integers. Not all cases between 1 and 1001.
The highest payoff for detecting faults is to test around the boundaries of the equivalence classes. Test n = 1, n = 2, n = 1000, n = 1001, and say n = 10. Five tests versus 1000.
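The boundary-value strategy above can be sketched in Python; `sort` here is a hypothetical wrapper around the built-in `sorted`, used only to make the precondition 2 <= n <= 1000 explicit:

```python
def sort(lst, n):
    # Hypothetical wrapper around Python's built-in sorted(), enforcing
    # the specified precondition 2 <= n <= 1000 on the list length n.
    if not (2 <= n <= 1000):
        raise ValueError("list length out of range")
    return sorted(lst[:n])

# Boundary tests: n = 1 and n = 1001 fall in the invalid partitions,
# n = 2 and n = 1000 sit on the boundaries, n = 10 is a typical value.
for n in (1, 1001):
    try:
        sort(list(range(n)), n)
        raise AssertionError("expected rejection of n = %d" % n)
    except ValueError:
        pass

for n in (2, 1000, 10):
    result = sort(list(range(n, 0, -1)), n)
    assert result == list(range(1, n + 1))  # sorted ascending
```

Five targeted cases cover both invalid partitions, both boundaries of the valid partition, and one typical value.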
White-box testing
Sometimes called structural testing or glass-box testing. Derivation of test cases according to program structure: knowledge of the program is used to identify additional test cases. The objective is to exercise all program statements (not all path combinations).
Branch coverage
All branches are tested once
White-box testing
Test cases for a binary search routine (search for Key in sorted array T, splitting on the mid-point; the result is (Found, L)):

Input array (T)              Key   Output (Found, L)
17                           17    true, 1
17                           0     false, ??
17, 21, 23, 29               17    true, 1
9, 16, 18, 30, 31, 41, 45    45    true, 7
17, 18, 21, 23, 29, 38, 41   23    true, 4
17, 18, 21, 23, 29, 33, 38   21    true, 3
12, 18, 21, 23, 32           23    true, 4
21, 23, 29, 33, 38           25    false, ??
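A minimal binary search (splitting on the mid-point) can be checked against rows of the table above; this sketch uses 1-based positions to match the table:

```python
def binary_search(T, key):
    # Classic binary search over a sorted array T.
    bottom, top = 0, len(T) - 1
    while bottom <= top:
        mid = (bottom + top) // 2
        if T[mid] == key:
            return True, mid + 1   # 1-based position L, as in the table
        elif T[mid] < key:
            bottom = mid + 1
        else:
            top = mid - 1
    return False, None             # the '??' rows: L is undefined

# A few rows from the test-case table:
assert binary_search([17], 17) == (True, 1)
assert binary_search([17], 0) == (False, None)
assert binary_search([9, 16, 18, 30, 31, 41, 45], 45) == (True, 7)
assert binary_search([17, 18, 21, 23, 29, 38, 41], 23) == (True, 4)
assert binary_search([21, 23, 29, 33, 38], 25) == (False, None)
```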
Path testing
The objective of path testing is to ensure that the set of test cases is such that each path through the program is executed at least once. The starting point for path testing is a program flow graph that shows nodes representing program decisions and arcs representing the flow of control. Statements with conditions are therefore nodes in the flow graph.
Cyclomatic complexity
The number of tests needed to test all control statements equals the cyclomatic complexity. Cyclomatic complexity equals the number of simple conditions in a program plus 1. Useful if used with care, but it does not imply adequacy of testing: although all paths are executed, all combinations of paths are not executed.
Independent paths through the flow graph (nodes 1-9):
1, 2, 3, 8, 9
1, 2, 3, 4, 6, 7, 2
1, 2, 3, 4, 5, 7, 2
1, 2, 3, 4, 6, 7, 2, 8, 9
Test cases should be derived so that all of these paths are executed. A dynamic program analyser may be used to check that paths have been executed.
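Cyclomatic complexity can also be computed from the graph as V(G) = E - N + 2; the edge list below is an assumption reconstructed from the independent paths listed above:

```python
# Assumed flow-graph edges for the routine above, read off the four
# independent paths (e.g. path 1,2,3,8,9 contributes edges 1->2, 2->3,
# 3->8, 8->9).
edges = [(1, 2), (2, 3), (2, 8), (3, 4), (3, 8),
         (4, 5), (4, 6), (5, 7), (6, 7), (7, 2), (8, 9)]
nodes = {n for e in edges for n in e}

v_g = len(edges) - len(nodes) + 2  # V(G) = E - N + 2
print(v_g)  # 4 — matching the four independent paths listed
```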
Feasibility
Pure black-box testing (to specification) is realistically impossible because there are (in general) too many test cases to consider. Pure testing to code requires a test of every possible path in a flow chart; this is also (in general) infeasible, and even exercising every path does not guarantee correctness. Normally, a combination of black-box and glass-box testing is done.
Integration testing
Tests complete systems or subsystems composed of integrated components. Integration testing should be black-box testing, with tests derived from the specification. The main difficulty is localising errors; incremental integration testing reduces this problem.
Bottom-up testing
Integrate individual components in levels until the complete system is created
Top-down testing
[Diagram: the level-1 component is tested first, then the level-2 components, with level-3 stubs standing in for lower levels; the testing sequence works downwards]
Bottom-up testing
[Diagram: test drivers exercise the level-N components first, then the drivers are replaced as levels N-1 and above are integrated upwards]
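The stub and driver ideas can be sketched as follows; the component names (`tax_rate`, `total_price`) are hypothetical:

```python
# Top-down: the high-level component is real; the lower level is a stub
# returning a canned answer so level 1 can be tested before level 2 exists.
def tax_rate_stub(region):
    return 0.10  # canned value standing in for the unwritten component

def total_price(net, region, rate_lookup=tax_rate_stub):
    return net * (1 + rate_lookup(region))

# Bottom-up: the low-level component is real; a throwaway test driver
# exercises it before any higher-level component exists.
def tax_rate(region):
    return {"EU": 0.20, "US": 0.10}[region]

def driver():
    assert tax_rate("EU") == 0.20
    assert tax_rate("US") == 0.10

driver()
print(round(total_price(100, "US"), 2))  # 110.0, computed via the stub
```

The stub keeps errors localised to level 1; the driver keeps them localised to the component under test, which is exactly the incremental-integration benefit mentioned above.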
More metrics
Direct measures: cost, effort, LOC, etc.
Indirect measures: functionality, quality, complexity, reliability, maintainability
Size-oriented:
Lines of code (LOC), effort (person-months), errors/KLOC, defects/KLOC, cost/KLOC
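For example, size-oriented metrics normalise raw counts by KLOC (thousands of lines of code); the figures below are invented for illustration:

```python
# Hypothetical project figures, used only to show the normalisation.
loc = 12_500       # lines of code
defects = 30
effort_pm = 24     # person-months
cost = 168_000     # dollars

kloc = loc / 1000
print(defects / kloc)  # 2.4 defects/KLOC
print(cost / kloc)     # 13440.0 cost/KLOC
```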