
10 Testing


Testing 1

Background
 Main objectives of a project: High Quality & High
Productivity (Q&P)
 Quality has many dimensions
 reliability, maintainability, interoperability etc.
 Reliability is perhaps the most important
 Reliability: related to the chances of the software failing
 More defects => more chances of failure => lower reliability
 Hence quality goal: Have as few defects as
possible in the delivered software!

Testing 2
Faults & Failure
 Failure: A software failure occurs if the behavior of
the s/w is different from expected/specified.

 Fault: cause of software failure


 Fault = bug = defect
 Failure implies presence of defects

 A defect has the potential to cause failure.


 The definition of a defect is environment- and project-specific

Testing 3
Role of Testing
 Identify defects remaining after the review
processes!
 Reviews are human processes - they cannot catch all defects
 There will be requirement defects, design defects
and coding defects in code
 Testing:
 Detects defects
 plays a critical role in ensuring quality.

Testing 4
Detecting defects in Testing
 During testing, a program is executed with a set of
test cases
 Failure during testing => defects are present
 No failure => confidence grows, but we cannot say “defects are absent”
 Defects detected through failures
 To detect defects, must cause failures during testing

Testing 5
Test Oracle
 To check if a failure has occurred when executed with a
test case, we need to know the correct behavior
 That is, we need a test oracle, which is often a human
 A human oracle makes each test case expensive, as someone has to check the correctness of its output

Testing 6
Common Test Oracles
 specifications and documentation,
 other products (for instance, an oracle for a software program
might be a second program that uses a different algorithm to
evaluate the same mathematical expression as the product under
test)
 a heuristic oracle that provides approximate results or exact results for a set of a few test inputs,
 a statistical oracle that uses statistical characteristics,
 a consistency oracle that compares the results of one test
execution to another for similarity,
 a model-based oracle that uses the same model to generate and
verify system behavior,
 or a human being's judgment (i.e. does the program "seem" to
the user to do the correct thing?).
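A minimal sketch of the “other product” style of oracle (the function names and tolerance here are illustrative, not from the slides): a second, simpler algorithm computes the same mathematical result and is used to check the product under test.

    import math
    import random

    def fast_power(x, n):
        # hypothetical product under test: exponentiation by squaring
        result = 1.0
        while n > 0:
            if n % 2 == 1:
                result *= x
            x *= x
            n //= 2
        return result

    def reference_power(x, n):
        # oracle: a second program using a different (slower) algorithm
        result = 1.0
        for _ in range(n):
            result *= x
        return result

    for _ in range(1000):
        x, n = random.uniform(0.5, 2.0), random.randint(0, 20)
        assert math.isclose(fast_power(x, n), reference_power(x, n)), (x, n)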

Testing 7
Role of Test cases
 Ideally would like the following for test cases
 No failure implies “no defects” or “high quality”
 If defects present, then some test case causes a failure
 Psychology of testing is important
 the goal should be to ‘reveal’ defects (not to show that it works!)
 test cases must be “destructive”
 Role of test cases is clearly very critical
 Only if test cases are “good” does confidence increase after testing
Testing 8
Test case design
 During test planning, have to design a set of test
cases that will detect defects present
 Some criteria needed to guide test case selection
 Two approaches to design test cases
 functional or black box
 structural or white box
 Both are complementary; we discuss a few approaches/criteria for both

Testing 9
Psychology & Economics of Software Testing
 Historically, testing was viewed as showing that the system meets its requirements
 This has evolved: testing is now performed with the primary aim of finding faults rather than proving correctness. It is perceived as a destructive (negative) process
 Seeking to find failures (the right approach) can be viewed as criticism of the product and/or its author
 But looking for failures is constructive (useful):
 Time can be saved
 Risks reduced
 Costs reduced
 Skills improved

Testing 10
A Tester needs (Traits of Good Testers)
 good communication skills
 good observation skills
 people-handling skills
 curiosity (an interest in testing activities)
 patience
 reliability
 thoroughness (carefulness)
 an inquisitive nature
 attention to detail
 creativity in terms of identifying likely faults
 experience
 However, as with most other disciplines, an effective test team will need a mix of skills, so it is difficult to generalise

Testing 11
A better testing approach
 Show that the system:
 does what it shouldn't
 doesn't do what it should
 Fastest achievement: difficult test cases
 Goal: find faults
 Success: system fails
 Result: fewer faults left in the system

Testing 12
 It is important that the objectives of testing are clearly understood, as humans will moderate their behaviour accordingly (however sub-consciously):
 “If testing is showing the system meets its requirements, then I will just produce tests that show this.”
 “If testing is aimed at finding faults, then I will be measured on this, so I will put effort into designing tests that are more likely to find faults.”
 The testing approach is therefore different from a developer’s

Testing 13
 Testing is an extremely creative and intellectually
challenging task.
 Tests must be written for invalid and unexpected, as well as valid and expected, input conditions

Testing 14
Black Box testing
 The software to be tested is treated as a black box
 Specification for the black box is given
 The expected behavior of the system is used to design
test cases
 Test cases are determined solely from specification.
 Internal structure of code not used for test case design

Testing 15
Black box testing…
 Premise: Expected behavior is specified.
 Hence just test for specified expected behavior
 How it is implemented is not an issue.

 For modules:
 the specifications produced during design specify the expected behavior
 For system testing,
 SRS specifies expected behavior

Testing 16
Black Box Testing…
 Most thorough functional testing - exhaustive
testing
 Software is designed to work for an input space
 Test the software with all elements in the input space
 Infeasible - too high a cost
 Need better method for selecting test cases
 Different approaches have been proposed

Testing 17
Equivalence Class partitioning
 A black box testing technique designed to minimize the number of test cases: the inputs are divided in such a way that the system is expected to behave the same way for all inputs in an equivalence partition. Test inputs are selected from each class. Every possible input belongs to one and only one equivalence partition.

Testing 18
Equivalence Class partitioning
 Divide the input space into equivalence classes
 If the software works for a test case from a class, then it is likely to work for all members of the class
 Can reduce the set of test cases if such equivalence classes can be identified
 Getting ideal equivalence classes is impossible
 Approximate it by identifying classes for which different behavior is specified
 Example: x = y mod 3 (a sketch of possible classes is given below)
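A minimal sketch of one plausible partitioning for this example (the slide does not list the classes, so the choice below, by remainder value plus an invalid non-integer class, is an assumption):

    # one representative input y per assumed equivalence class of x = y mod 3
    valid_classes = {
        "remainder 0": 9,     # 9 mod 3 == 0
        "remainder 1": 10,    # 10 mod 3 == 1
        "remainder 2": 11,    # 11 mod 3 == 2
        "negative y": -4,     # negative inputs may be handled differently
    }
    invalid_classes = {"non-integer input": "abc"}

    for name, y in valid_classes.items():
        print(name, "->", y % 3)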
Testing 19
 In a computer store, the computer item can have a quantity between -500 and +500. What are the equivalence classes?

Testing 20
Equivalence Class Examples
In a computer store, the computer item can have a quantity between -500 and +500. What are the equivalence classes?

Answer: Valid class: -500 <= QTY <= +500


Invalid class: QTY > +500
Invalid class: QTY < -500
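A minimal sketch of test cases derived from these classes, assuming a hypothetical set_quantity function that rejects out-of-range values:

    def set_quantity(qty):
        # hypothetical function under test: valid range is -500..+500 inclusive
        if not -500 <= qty <= 500:
            raise ValueError("quantity out of range")
        return qty

    assert set_quantity(0) == 0        # one representative of the valid class
    for bad in (501, -501):            # one representative of each invalid class
        try:
            set_quantity(bad)
            raise AssertionError("out-of-range quantity was accepted")
        except ValueError:
            pass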

Testing 21
Equivalence class partitioning…
 Rationale: specification requires same behavior for
elements in a class
 Software likely to be constructed such that it either
fails for all or for none.
 E.g. if a function was not designed for negative
numbers then it will fail for all the negative numbers
 For robustness, we should form equivalence classes for invalid inputs also

Testing 22
Equivalence class partitioning..
 Every condition specified on an input forms an equivalence class
 Define invalid equivalence classes also
 E.g. range 0 < value < Max specified
 this range is the valid class
 input <= 0 is an invalid class
 input >= Max is an invalid class
 Whenever the entire range may not be treated uniformly, split it into multiple classes

Testing 23
Equivalence class…
 Once eq classes are selected for each of the inputs, test cases have to be selected
 Select each test case covering as many valid equivalence
classes as possible
 Or, have a test case that covers at most one valid class for
each input
 Plus a separate test case for each invalid class

Testing 24
Example
 Consider a program that takes 2 inputs – a string s and an integer n
 The program determines the n most frequent characters in s
 The tester believes that the programmer may deal with different types of chars separately

 Describe the valid and invalid equivalence classes

Testing 25
Example..
Input   Valid Eq Classes                     Invalid Eq Classes
s       1: contains numbers                  1: non-ASCII chars
        2: lower case letters                2: str len > N
        3: upper case letters
        4: special chars
        5: str len between 0 and N (max)
n       6: int in valid range                3: int out of range

Testing 26
Example…
 Test cases (i.e. pairs s, n) with the first approach
 s: a str of len < N containing lower case, upper case, numbers, and special chars; n = 5
 plus test cases for each of the invalid eq classes
 Total test cases: 1 valid + 3 invalid = 4
 With the second approach
 a separate str for each type of char (i.e. a str of numbers, one of lower case, …) + invalid cases
 Total test cases: 6 + 3 = 9
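A minimal sketch of both sets of test cases (most_frequent and the class representatives below are illustrative assumptions, not taken from the slides):

    from collections import Counter

    def most_frequent(s, n):
        # hypothetical implementation of the program under test
        return [c for c, _ in Counter(s).most_common(n)]

    # first approach: one test case covering all valid classes at once
    first_approach = [("aA1!bB2@", 5)]

    # second approach: one test case per valid class
    second_approach = [
        ("12345", 2), ("abcde", 2), ("ABCDE", 2),   # numbers, lower, upper
        ("!@#$%", 2), ("abcabc", 3), ("xyz", 1),    # special chars, len 0..N, valid n
    ]

    # invalid classes (tried separately): non-ASCII chars, str too long, n out of range
    invalid_cases = [("déjà", 2), ("a" * 10_000, 2), ("abc", -1)]

    for s, n in first_approach + second_approach:
        print(s, n, most_frequent(s, n))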

Testing 27
Boundary value analysis
In boundary testing, you test specific values on the boundary; these values are known from the specification and need not be chosen from within a class.

 Programs often fail on special values


 These values often lie on boundary of equivalence
classes
 Test cases that have boundary values have high
yield
 These are also called extreme cases
 A BV test case is a set of input data that lies on the edge of an equivalence class of the input/output

Testing 28
Boundary value analysis (cont)...
 For each equivalence class
 choose values on the edges of the class
 choose values just outside the edges
 E.g. if 0 <= x <= 1.0
 0.0, 1.0 are edges inside
 -0.1, 1.1 are just outside
 E.g. for a bounded list - have a null (empty) list, a maximum-size list
 Consider outputs also and have test cases generate
outputs on the boundary
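A minimal sketch for the 0 <= x <= 1.0 example, assuming a hypothetical in_range check as the unit under test:

    def in_range(x):
        # hypothetical function under test: is x within [0.0, 1.0]?
        return 0.0 <= x <= 1.0

    boundary_cases = [(0.0, True), (1.0, True),      # edges inside the class
                      (-0.1, False), (1.1, False),   # just outside the edges
                      (0.5, True)]                   # one nominal value
    for x, expected in boundary_cases:
        assert in_range(x) == expected, x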

Testing 29
Boundary Value Analysis
 In BVA we determine the value of vars that should
be used
 If input is a defined range, then there are 6
boundary values plus 1 normal value (tot: 7)

[Diagram: input range from Min to Max; the boundary values lie just below, at, and just above Min and Max, plus one normal value in between]
 If there are multiple inputs, how to combine them into test cases? Two strategies are possible
 Try all possible combinations of the BVs of the different variables; with n vars this gives 7^n test cases!
 Select BVs for one var at a time, keeping the other vars at normal values, plus 1 test case with all vars at normal values
Testing 30
BVA.. (test cases for two vars – x and y)
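The table that belonged to this slide did not survive extraction; the sketch below reconstructs the second strategy (boundary values of one variable at a time, the other held at its normal value) for two assumed ranges of x and y:

    def bva_values(lo, hi):
        # 6 boundary values plus 1 normal value for a range [lo, hi]
        nominal = (lo + hi) // 2
        return [lo - 1, lo, lo + 1, nominal, hi - 1, hi, hi + 1]

    def bva_pairs(x_range, y_range):
        x_vals, y_vals = bva_values(*x_range), bva_values(*y_range)
        x_nom, y_nom = x_vals[3], y_vals[3]
        # vary x with y at normal, then vary y with x at normal
        cases = {(x, y_nom) for x in x_vals} | {(x_nom, y) for y in y_vals}
        return sorted(cases)   # 13 cases: 7 + 7 minus the shared all-normal pair

    print(bva_pairs((0, 100), (10, 50)))   # assumed ranges, for illustration only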

Testing 31
White box testing
 Black box testing focuses only on functionality
 What the program does; not how it is implemented
 White box testing focuses on implementation
 Aim is to exercise different program structures with the
intent of uncovering errors
 Is also called structural testing
 Various criteria exist for test case design
 Test cases have to be selected to satisfy coverage
criteria
Testing 32
Types of structural testing
 Control flow based criteria
 looks at the coverage of the control flow graph
 Data flow based testing
 looks at the coverage in the definition-use graph
 Mutation testing
 looks at various mutants of the program
 We will discuss control flow based and data flow
based criteria

Testing 33
Control flow based criteria
 Considers the program as a control flow graph
 Nodes represent code blocks – i.e. set of statements
always executed together
 An edge (i,j) represents a possible transfer of control
from i to j
 Assume a start node and an end node
 A path is a sequence of nodes from start to end

Testing 34
Statement Coverage Criterion
 Criterion: Each statement is executed at least once
during testing
 I.e. set of paths executed during testing should include
all nodes
 Limitation: does not require a decision to evaluate to
false if no else clause
 E.g. abs(x): if (x >= 0) x = -x; return (x) (note the defect: the condition should be x < 0)
 The set of test cases {x = 0} achieves 100% statement coverage, but the error is not detected (sketched below)
 Guaranteeing 100% coverage not always possible due to
possibility of unreachable nodes
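A minimal sketch of the abs example above, translated to Python, showing how 100% statement coverage can still miss the defect:

    def abs_val(x):
        # the slide's defective abs: the condition should be x < 0
        if x >= 0:
            x = -x
        return x

    # {x = 0} executes every statement, i.e. 100% statement coverage ...
    assert abs_val(0) == 0   # passes, so no failure is observed
    # ... yet abs_val(3) returns -3 instead of 3: full statement
    # coverage did not guarantee that the defect would be detected.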

Testing 35
Branch coverage
 Criterion: Each edge should be traversed at
least once during testing
 i.e. each decision must evaluate to both true
and false during testing
 Branch coverage implies stmt coverage
 If multiple conditions in a decision, then all
conditions need not be evaluated to T and F
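Continuing the abs_val sketch from the statement coverage slide: branch coverage forces the decision to evaluate to both true and false, and here the extra test case exposes the defect.

    def abs_val(x):
        # the same defective abs as before (condition should be x < 0)
        if x >= 0:
            x = -x
        return x

    # x = 0 drives the decision to true, x = -3 drives it to false
    for x, expected in [(0, 0), (-3, 3)]:
        assert abs_val(x) == expected   # the x = -3 case fails, revealing the defect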

Testing 36
Control flow based…
 There are other criteria too - path coverage,
predicate coverage, cyclomatic complexity based,
...
 None is sufficient to detect all types of defects (e.g. if the program is missing some paths, that cannot be detected)
 They provide some quantitative handle on the
breadth of testing
 More used to evaluate the level of testing rather
than selecting test cases

Testing 37
Data flow-based testing
 A def-use graph is constructed from the control
flow graph
 A stmt in the control flow graph (in which each
stmt is a node) can be of these types
 Def: represents definition of a var (i.e. when var is on the
lhs)
 C-use: computational use of a var
 P-use: var used in a predicate for control transfer

Testing 38
Data flow based…
 A def-use graph is constructed by associating vars with nodes and edges in the control flow graph
 For a node i, def(i) is the set of vars for which there is a global def in i
 For a node i, c-use(i) is the set of vars for which there is a global c-use in i
 For an edge (i, j), p-use(i, j) is the set of vars for which there is a p-use on the edge (i, j)
 Def-clear path from i to j w.r.t. x: a path with no def of x in its intermediate nodes
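A tiny annotated example (not taken from the slides) showing how def, c-use and p-use are attached to nodes and edges:

    def classify(a, b):
        x = a + b       # node 1: def(1) = {x}; c-use(1) = {a, b}
        if x > 10:      # node 2: p-use(2,3) = p-use(2,4) = {x}
            y = x * 2   # node 3: def(3) = {y}; c-use(3) = {x}
        else:
            y = 0       # node 4: def(4) = {y}
        return y        # node 5: c-use(5) = {y}
    # The path 1 -> 2 -> 3 -> 5 is def-clear w.r.t. x: no node on it redefines x.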

Testing 39
Data flow based criteria
 all-defs: for every node i and every x in def(i), a def-clear path from i to some use of x must be exercised
 i.e. for the def of every var, at least one of its uses (p-use or c-use) must be tested
 all-p-uses: all p-uses of all the definitions should be tested
 i.e. all p-uses of all the defs must be tested
 Some-c-uses, all-c-uses, some-p-uses are some
other criteria

Testing 40
Relationship between diff criteria

Testing 41
Tool support and test case selection

 Two major issues for using these criteria


 How to determine the coverage
 How to select test cases to ensure coverage
 For determining coverage - tools are essential
 Tools also tell which branches and statements are
not executed
 Test case selection is mostly manual - test plan is to
be augmented based on coverage data

Testing 42
Integration and Testing
 Incremental testing requires incremental ‘building’, i.e. incrementally integrating parts to form the system
 Integration & testing are related
 During coding, different modules are coded
separately
 Integration - the order in which they should be
tested and combined
 Integration is driven mostly by testing needs
Testing 43
Top-down and Bottom-up
 System : Hierarchy of modules
 Modules coded separately
 Integration can start from bottom or top
 Bottom-up requires test drivers
 Top-down requires stubs
 Both may be used, e.g. for user interfaces top-
down; for services bottom-up
 Drivers and stubs are code pieces written only
for testing
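A minimal sketch (hypothetical module and names) of a driver and a stub written only for testing:

    def order_total(items, tax_rate):
        # the unit being integrated/tested
        subtotal = sum(price for _, price in items)
        return round(subtotal * (1 + tax_rate), 2)

    # Driver (bottom-up): a throwaway caller standing in for the higher-level
    # module that has not been integrated yet.
    def driver():
        assert order_total([("pen", 2.0), ("book", 8.0)], 0.1) == 11.0

    # Stub (top-down): a throwaway callee standing in for a lower-level
    # service (say, a tax-rate lookup) that is not yet available.
    def tax_rate_stub(region):
        return 0.1   # canned answer, just enough for the caller's tests

    driver()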

Testing 44
Levels of Testing
 The code contains requirement defects, design defects,
and coding defects
 Nature of defects is different for different injection
stages
 One type of testing will be unable to detect the
different types of defects
 Different levels of testing are used to uncover these
defects

Testing 45
User needs                 ->  Acceptance testing
Requirement specification  ->  System testing
Design                     ->  Integration testing
Code                       ->  Unit testing


Testing 46
Unit Testing
 Different modules tested separately
 Focus: defects injected during coding
 Essentially a code verification technique, covered
in previous chapter
 UT is closely associated with coding
 Frequently the programmer does UT; coding phase
sometimes called “coding and unit testing”
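A minimal unit test sketch using Python's unittest (the function under test is hypothetical):

    import unittest

    def word_count(text):
        # hypothetical unit under test
        return len(text.split())

    class WordCountTest(unittest.TestCase):
        def test_normal_sentence(self):
            self.assertEqual(word_count("unit testing finds coding defects"), 5)

        def test_empty_string(self):
            self.assertEqual(word_count(""), 0)

    if __name__ == "__main__":
        unittest.main()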

Testing 47
Integration Testing
 Focuses on interaction of modules in a subsystem
 Unit tested modules combined to form subsystems
 Test cases to “exercise” the interaction of modules in
different ways
 May be skipped if the system is not too large

Testing 48
System Testing
 Entire software system is tested
 Focus: does the software implement the
requirements?
 Validation exercise for the system with respect
to the requirements
 Generally the final testing stage before the
software is delivered
 May be done by independent people
 Defects removed by developers
 Most time consuming test phase
Testing 49
Acceptance Testing
 Focus: Does the software satisfy user needs?
 Generally done by end users/customer in customer
environment, with real data
 Only after successful AT is the software deployed
 Any defects found are removed by developers
 Acceptance test plan is based on the acceptance
test criteria in the SRS

Testing 50
Other forms of testing
 Performance testing
 tools needed to “measure” performance
 Stress testing
 load the system to peak, load generation tools needed
 Regression testing
 test that previously working functionality still works correctly
 important when changes are made
 previous test records are needed for comparisons
 prioritization of test cases is needed when the complete test suite cannot be executed for a change
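A minimal sketch of regression checking against previous test records (the baseline file name and its format are assumptions):

    import json

    def run_regression(function_under_test, baseline_path="baseline.json"):
        # baseline.json maps recorded inputs to the outputs observed
        # when the functionality was known to work
        with open(baseline_path) as f:
            baseline = json.load(f)
        failures = [inp for inp, expected in baseline.items()
                    if function_under_test(inp) != expected]
        return failures   # non-empty => previously working behavior has regressed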

Testing 51
Test Plan
 Testing usually starts with test plan and ends with
acceptance testing
 Test plan is a general document that defines the
scope and approach for testing for the whole
project
 Inputs are SRS, project plan, design
 Test plan identifies what levels of testing will be
done, what units will be tested, etc in the project

Testing 52
Test Plan…
 Test plan usually contains
 Test unit specs: what units need to be tested separately
 Features to be tested: these may include functionality,
performance, usability,…
 Approach: criteria to be used, when to stop, how to
evaluate, etc
 Test deliverables
 Schedule and task allocation

Testing 53
Test case specifications
 Test plan focuses on approach; does not deal with
details of testing a unit
 Test case specification has to be done separately
for each unit
 Based on the plan (approach, features,..) test cases
are determined for a unit
 Expected outcome also needs to be specified for
each test case

Testing 54
Test case specifications…
 Together the set of test cases should detect most
of the defects
 Would like the set of test cases to detect any defect, if it exists
 Would also like the set of test cases to be small - each test case consumes effort
 Determining a reasonable set of test cases is the most challenging task of testing

Testing 55
Test case specifications…
 The effectiveness and cost of testing depends on the set of
test cases
 Q: How to determine if a set of test cases is good? I.e. the
set will detect most of the defects, and a smaller set cannot
catch these defects
 No easy way to determine goodness; usually the set of test
cases is reviewed by experts
 This requires test cases be specified before testing – a key
reason for having test case specs
 Test case specs are essentially a table

Testing 56
Test case specifications…

Seq. No.   Condition to be tested   Test Data   Expected Result   Successful
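A hypothetical filled-in entry of such a table, written as structured data for illustration:

    test_case = {
        "seq_no": 1,
        "condition_to_be_tested": "quantity above the valid range is rejected",
        "test_data": {"QTY": 501},
        "expected_result": "error message: quantity out of range",
        "successful": None,   # recorded (yes/no) after the test is executed
    }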

Testing 57
