Unit 4 Part A: Testing and Debugging
Fundamentals of testing
Levels of testing
Test cases
Black box testing techniques
White box testing techniques
Introduction to Selenium: features of Selenium, versions of Selenium, record and playback
How do you test a program?
Input test data to the program.
Observe the output: check whether the program behaved as expected.
How do you test a system?
If the program does not behave as expected:
note the conditions under which it failed,
then later debug and correct it.
Errors, Faults, and Failures
A failure is a manifestation of an error (also known as a defect or bug).
The mere presence of an error may not lead to a failure.
Errors, Faults, and Failures
A fault is an incorrect state entered during program execution:
for example, a variable value is different from what it should be.
A fault may or may not lead to a failure.
Test cases and Test suites
Software is tested using a set of carefully designed test cases:
the set of all test cases is called the test suite.
Test cases and Test suites
A test case is a triplet [I, S, O], where:
I is the data to be input to the system,
S is the state of the system at which the data will be input,
O is the expected output of the system.
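The triplet can be represented directly as a small data structure. The sketch below is a minimal Java version; the class and field names (TestCase, input, state, expectedOutput) are illustrative assumptions, not part of the original material.

// Minimal sketch of the [I, S, O] triplet as a Java class.
// All names here are illustrative assumptions.
public class TestCase {
    final String input;          // I: data to be input to the system
    final String state;          // S: state of the system when the data is input
    final String expectedOutput; // O: expected output of the system

    public TestCase(String input, String state, String expectedOutput) {
        this.input = input;
        this.state = state;
        this.expectedOutput = expectedOutput;
    }
}

For example, a test case for the square-root program discussed later in this unit might be new TestCase("500", "ready to read input", "22.36").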
Verification versus Validation
Verification is the process of determining whether the output of one phase of development conforms to the output of the previous phase.
Validation is the process of determining whether a fully developed system conforms to its SRS document.
Verification versus Validation
Verification is concerned with phase containment of errors,
whereas the aim of validation is that the final product be error-free.
Design of Test Cases
Exhaustive testing of any non-trivial system is impractical:
the input data domain is extremely large.
Instead, design an optimal test suite:
one of reasonable size that uncovers as many errors as possible.
Design of Test Cases
If test cases are selected randomly:
many of them contribute little to the significance of the test suite,
because they do not detect any errors not already detected by other test cases in the suite.
The number of test cases in a randomly selected test suite is therefore not an indication of the effectiveness of the testing.
Design of Test Cases
Testing a system with a large number of randomly selected test cases does not mean that many errors in the system will be uncovered.
Consider an example: finding the maximum of two integers x and y.
Design of Test Cases
The code has a simple programming error:

if (x > y) max = x;
else max = x;   /* bug: the else branch should assign y */

The test suite {(x=3,y=2), (x=2,y=3)} can detect the error,
whereas the larger test suite {(x=3,y=2), (x=4,y=3), (x=5,y=1)} does not detect it.
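To see why, the minimal runnable sketch below (class and method names are illustrative) runs both suites against the buggy code; only the suite that exercises the else branch with y > x exposes the error.

// Illustrative demonstration: exercising the buggy max with both test suites.
public class MaxDemo {
    // The buggy implementation from the slide: the else branch should assign y.
    static int buggyMax(int x, int y) {
        int max;
        if (x > y) max = x;
        else max = x; // bug: should be max = y
        return max;
    }

    static void runSuite(String name, int[][] cases) {
        for (int[] c : cases) {
            if (buggyMax(c[0], c[1]) != Math.max(c[0], c[1]))
                System.out.println(name + ": error detected at (x=" + c[0] + ", y=" + c[1] + ")");
        }
    }

    public static void main(String[] args) {
        runSuite("small suite", new int[][]{{3, 2}, {2, 3}});          // detects the bug
        runSuite("larger suite", new int[][]{{3, 2}, {4, 3}, {5, 1}}); // detects nothing
    }
}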
Design of Test Cases
Systematic approaches are required to design an optimal test suite:
each test case in the suite should detect different errors.
Design of Test Cases
There are essentially two main approaches to designing test cases:
the black-box approach
the white-box (or glass-box) approach
Testing Levels
Software products are tested at three levels:
Unit testing
Integration testing
System testing
Four levels of testing, corresponding to the phases of the SDLC model:
Level 1: coding and development phase; the smallest elements of the program are tested (a unit, module, or function).
Level 2: testing phase.
Level 3: testing phase.
Level 4: maintenance phase.
Approaches to defining test cases
Black-box testing: test cases are designed without examining the internal structure.
White-box testing: test cases are designed using knowledge of the internal structure.
Unit testing
During unit testing, modules are tested in isolation:
if all modules were tested together,
it may not be easy to determine which module has the error.
Unit testing
Unit testing reduces debugging effort severalfold.
Programmers carry out unit testing immediately after they complete the coding of a module.
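In practice, a unit test is written as a small automated check on a single function. The sketch below is a minimal example, assuming JUnit 5 and a hypothetical MaxUtil class whose max method is a corrected version of the earlier example; none of these names come from the original material.

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Minimal JUnit 5 sketch; MaxUtil is a hypothetical class under test.
class MaxUtilTest {
    @Test
    void returnsFirstWhenLarger() {
        assertEquals(3, MaxUtil.max(3, 2));
    }

    @Test
    void returnsSecondWhenLarger() {
        assertEquals(3, MaxUtil.max(2, 3)); // this case would expose the earlier bug
    }
}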
Integration testing
After the different modules of a system have been coded and unit tested:
the modules are integrated in steps according to an integration plan,
and the partially integrated system is tested at each integration step.
System Testing
System testing involves validating the fully developed system against its requirements.
Integration Testing
Develop the integration plan by examining the structure chart:
big-bang approach
top-down approach
bottom-up approach
mixed approach
Big-bang Integration Testing
The big-bang approach is the simplest integration testing approach:
all the modules are simply put together and tested.
This technique is used only for very small systems.
Big-bang Integration Testing
The main problems with this approach:
if an error is found, it is very difficult to localize,
since it may potentially belong to any of the modules being integrated.
Errors found during big-bang integration testing are therefore very expensive to debug and fix.
Bottom-up Integration Testing
Integrate and test the bottom-level modules first.
A disadvantage of bottom-up testing arises when the system is made up of a large number of small subsystems:
this extreme case degenerates to the big-bang approach.
Top-down Integration Testing
Top-down integration testing starts with the main routine and one or two subordinate routines in the system.
After the top-level 'skeleton' has been tested,
the immediate subordinate modules of the 'skeleton' are combined with it and tested.
Mixed Integration Testing
Mixed (or sandwiched) integration testing uses both the top-down and bottom-up testing approaches.
It is the most common approach.
Mixed Integration Testing
In the top-down approach, testing has to wait until all the top-level modules have been coded and unit tested.
In the bottom-up approach, testing can start only after the bottom-level modules are ready.
The mixed approach overcomes both of these limitations.
System Testing
There are three main kinds of system testing:
Alpha Testing
Beta Testing
Acceptance Testing
Alpha Testing
System testing carried out by the test team within the developing organization.
Beta Testing
System testing performed by a select group of friendly customers.
Acceptance Testing
System testing performed by the customer,
to determine whether the system should be accepted or rejected.
Performance Testing
Performance testing checks whether the system meets its non-functional requirements.
Several kinds of performance testing are discussed next, including stress, volume, configuration, compatibility, recovery, maintenance, documentation, usability, and regression testing.
Stress Testing
Stress testing (also known as endurance testing) imposes abnormal input to stress the capabilities of the software:
input data volume, input data rate, processing time, memory utilization, etc. are tested beyond the designed capacity.
Volume Testing
Addresses the handling of large amounts of data in the system:
whether data structures (e.g. queues, stacks, arrays) are large enough to handle all possible situations.
Fields, records, and files are stressed to check whether their sizes can accommodate all possible data volumes.
Configuration Testing
Analyzes system behavior in the various hardware and software configurations specified in the requirements:
sometimes systems are built in various configurations for different users;
for instance, a minimal configuration may serve a single user,
while other configurations serve additional users.
Compatibility Testing
Needed when the system interfaces with external systems:
checks whether the interface with the external system functions as required (see the example later in this unit).
Recovery Testing
Checks the response of the system to the presence of faults or to the loss of power, devices, services, or data:
the system is subjected to such failures, and it is verified that it recovers satisfactorily.
Maintenance Testing
Verifies that all the artifacts required for maintenance exist and function properly.
Documentation tests
Check that the required user manuals, maintenance guides, and technical documents exist and are consistent.
Usability tests
Check whether the user interface is easy to learn and use,
e.g. display screens, messages, and report formats.
Regression Testing
Performed whenever the software is modified, e.g. during maintenance or after error correction:
old test cases are rerun to verify that no new errors have been introduced into the previously working functionality.
Black-box Testing
Test cases are designed using only the functional specification of the software,
without any knowledge of the internal structure of the software.
For this reason, black-box testing is also known as functional testing.
White-box Testing
Designing white-box test cases requires knowledge of the internal structure of the software,
so white-box testing is also called structural testing.
In this unit we will not study white-box testing.
Black-box Testing
There are essentially two main approaches to designing black-box test cases:
Equivalence class partitioning
Boundary value analysis
Equivalence Class Partitioning
The input values to a program are partitioned into equivalence classes.
Partitioning is done such that the program behaves in a similar way for every input value belonging to the same equivalence class.
Why define equivalence classes?
Test the code with just one representative value from each equivalence class:
this is as good as testing with any other value from the same equivalence class.
Equivalence Class Partitioning
How do you determine the equivalence classes?
Examine the input data.
Only a few general guidelines for determining the equivalence classes can be given.
Equivalence Class Partitioning
If the input data to the program is specified by a range of values,
e.g. numbers from 1 to 5000:
one valid and two invalid equivalence classes are defined.
Equivalence Class Partitioning
If the input is an enumerated set of values,
e.g. {a, b, c}:
one equivalence class for valid input values
and another equivalence class for invalid input values should be defined.
Example
A program (SQRT) reads an input value in the range of 1 to 5000
and computes the square root of the input number.
Example (cont.)
There are three equivalence classes:
the set of negative integers,
the set of integers in the range of 1 to 5000,
and the set of integers larger than 5000.
Example (cont.)
The test suite must include representatives from each of the three equivalence classes:
a possible test suite is {-5, 500, 6000}.
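This can be demonstrated with a small driver that runs one representative from each class against the program under test. The sketch below is illustrative: sqrtInRange is a hypothetical implementation of the example program, not code from the slides.

// Illustrative sketch: one representative value per equivalence class.
public class EquivalenceDemo {
    // Hypothetical program under test: square root for inputs in [1, 5000];
    // returns -1 to signal an invalid input.
    static double sqrtInRange(int n) {
        if (n < 1 || n > 5000) return -1;
        return Math.sqrt(n);
    }

    public static void main(String[] args) {
        int[] representatives = {-5, 500, 6000}; // one value per equivalence class
        for (int r : representatives)
            System.out.println("input " + r + " -> " + sqrtInRange(r));
    }
}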
Boundary Value Analysis
Some typical programming errors occur at the boundaries of equivalence classes:
this might be purely due to psychological factors.
Programmers often fail to see the special processing required at the boundaries of equivalence classes.
Boundary Value Analysis
Programmers may improperly use < instead of <=.
Boundary value analysis:
select test cases at the boundaries of the different equivalence classes.
Example
For a function that computes the square root of an integer in the range of 1 to 5000:
the test cases must include the values {0, 1, 5000, 5001}.
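A boundary-value check for the same hypothetical sqrtInRange method might look like the sketch below; whether each boundary value is valid follows directly from the specified range.

// Illustrative boundary-value checks for the hypothetical sqrtInRange method.
public class BoundaryDemo {
    static double sqrtInRange(int n) { // same hypothetical method as before
        if (n < 1 || n > 5000) return -1;
        return Math.sqrt(n);
    }

    public static void main(String[] args) {
        int[] boundaries = {0, 1, 5000, 5001}; // values at the class boundaries
        for (int b : boundaries) {
            boolean shouldBeValid = (b >= 1 && b <= 5000); // per the specification
            boolean accepted = sqrtInRange(b) >= 0;        // -1 signals rejection
            System.out.println("input " + b + ": "
                + (accepted == shouldBeValid ? "handled correctly" : "reveals an error"));
        }
    }
}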
Compatibility testing: Example
If a system is to communicate with a large database system to retrieve information:
a compatibility test examines the speed and accuracy of the retrieval.
Error Seeding
Known errors are deliberately introduced (seeded) into the program;
the proportion of seeded errors detected during testing is used to estimate the number of residual errors.
Error Seeding
Let:
N be the total number of (original) errors in the system,
n of these errors be found by testing,
S be the total number of seeded errors,
and s of the seeded errors be found during testing.
Error Seeding
Assuming that seeded and original errors are equally likely to be detected:
n/N = s/S, so N = (S × n) / s.
Estimated remaining defects: N − n = n × (S − s) / s.
Example
100 seeded errors were introduced;
90 of these seeded errors were found during testing,
and 50 other (unseeded) errors were also found.
Estimated remaining errors = 50 × (100 − 90) / 90 ≈ 6.
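The estimate is straightforward to compute; the sketch below is an illustrative helper (the class and method names are assumptions) that reproduces the worked example.

// Illustrative helper for the error-seeding estimate: remaining = n * (S - s) / s.
public class ErrorSeeding {
    static double remainingErrors(int n, int seeded, int seededFound) {
        return n * (double) (seeded - seededFound) / seededFound;
    }

    public static void main(String[] args) {
        // Worked example from the slides: S = 100, s = 90, n = 50.
        System.out.println(remainingErrors(50, 100, 90)); // prints 5.55..., i.e. about 6
    }
}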
Selenium
Selenium began in 2004, when Jason Huggins built an in-house testing tool called "JavaScriptTestRunner".
He made JavaScriptTestRunner open source, and it was later renamed Selenium Core.

Selenium Grid
Selenium Grid was developed by Patrick Lightbody
to address the need to minimize test execution times as much as possible.
He initially called the system "Hosted QA."
It captured browser screenshots during significant stages,
and also sent Selenium commands to different machines simultaneously.
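The syllabus also lists record and playback: Selenium IDE records a user's browser actions and plays them back, and recorded tests can be exported as code in several languages. The sketch below is a minimal, illustrative example of what such an exported test might look like with the Selenium WebDriver Java bindings; the URL and element locator are assumptions, not from this material.

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

// Minimal illustrative Selenium WebDriver test in Java.
// The URL and locator below are assumptions.
public class RecordedTestSketch {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();     // launch a Chrome session
        try {
            driver.get("https://example.com");     // navigate, as a recorded step would
            System.out.println("Page title: " + driver.getTitle()); // observe the output
            driver.findElement(By.tagName("h1"));  // locate an element, as playback would
        } finally {
            driver.quit();                         // always close the browser
        }
    }
}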