Unit 4

Coding and Testing


Coding is undertaken once the design phase is complete and the design documents have
been successfully reviewed. In the coding phase, every module specified in the design document
is coded and unit tested. During unit testing, each module is tested in isolation from other
modules. That is, a module is tested independently as and when its coding is complete.
After all the modules of a system have been coded and unit tested, the integration and
system testing phase is undertaken. Integration and testing of modules is carried out according to
an integration plan. The integration plan, according to which different modules are integrated
together, usually envisages integration of modules through a number of steps. During each
integration step, a number of modules are added to the partially integrated system and the
resultant system is tested. The full product takes shape only after all the modules have been
integrated together. System testing is conducted on the full product. During system testing, the
product is tested against its requirements as recorded in the SRS document.

 In a typical development organization, at any time, the largest number of software
engineers can be found to be engaged in testing activities. It is not surprising, then,
that there is always a large demand for software test engineers in the software industry.
However, many novice engineers bear the wrong impression that testing is a secondary
activity and that it is intellectually not as stimulating as the activities associated with the
other development phases. Over the years, the general perception of testing as monkeys
typing in random data and trying to crash the system has changed. Now testers are looked
upon as masters of specialized concepts, techniques, and tools.
 As we shall soon realize, testing a software product is as challenging as the initial
development activities such as specification, design, and coding. Moreover, testing
involves a lot of creative thinking.

CODING
The input to the coding phase is the design document produced at the end of the design
phase. The detailed design is usually documented in the form of module specifications where the
data structures and algorithms for each module are specified. During the coding phase, different
modules identified in the design document are coded according to their respective module
specifications. We can describe the overall objective of the coding phase to be the following.
 The objective of the coding phase is to transform the design of a system into code in a
high-level language, and then to unit test this code.
Normally, good software development organizations require their programmers to adhere to
some well defined and standard style of coding which is called their coding standard. These
software development organizations formulate their own coding standards that suit them the
most, and require their developers to follow the standards rigorously because of the significant
business advantages it offers. The main advantages of adhering to a standard style of coding are
the following:
 A coding standard gives a uniform appearance to the code written by different engineers.
 It facilitates code understanding and code reuse.
 It promotes good programming practices.
 It is mandatory for the programmers to follow the coding standards. Compliance of their
code to coding standards is verified during code inspection.
 Any code that does not conform to the coding standards is rejected during code review
and the code is reworked by the concerned programmer.
 In contrast, coding guidelines provide some general suggestions regarding the coding
style to be followed but leave the actual implementation of these guidelines to the
discretion of the individual developers.

Coding Standards and Guidelines


Good software development organizations usually develop their own coding standards and
guidelines depending on what suits their organization best and based on the specific types of
software they develop.
Representative coding standards
Rules for limiting the use of globals:
These rules list what types of data can be declared global and what cannot, with a view to
limiting the data that needs to be defined with global scope.
Standard headers for different modules:
The header of different modules should have a standard format and information for ease of
understanding and maintenance. The following is an example of a header format that is being
used in some companies (a sketch of such a header is given after the list):
 Name of the module.
 Date on which the module was created.
 Author's name.
 Modification history.
 Synopsis of the module, i.e., a small write-up about what the module does.
 Different functions supported in the module, along with their input/output parameters.
 Global variables accessed/modified by the module.
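A sketch of such a header (the module name, author, dates, and function names below are
hypothetical, shown only to illustrate the format):

/**********************************************************
 * Module name   : queue
 * Creation date : 12/01/2020
 * Author        : A. Programmer
 * Modification history :
 *    15/01/2020 -- fixed overflow check in q_insert()
 * Synopsis      : Implements a bounded FIFO queue of integers.
 * Functions     : q_insert(int)  -- adds an element at the rear
 *                 q_delete(void) -- removes and returns the front element
 * Global variables accessed/modified : none
 **********************************************************/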

Naming conventions for global variables, local variables, and constant identifiers:
A popular naming convention is that variables are named using mixed case lettering. Global
variable names would always start with a capital letter (e.g., GlobalData) and local variable
names start with small letters (e.g., localData). Constant names should be formed using capital
letters only (e.g., CONSTDATA).
Conventions regarding error return values and exception handling mechanisms:
The way error conditions are reported by different functions in a program should be standard
within an organization. For example, all functions while encountering an error condition should
either return a 0 or 1 consistently, independent of which programmer has written the code. This
facilitates reuse and debugging.
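A minimal sketch of such a convention (the function names and the choice of returning 1 on
success and 0 on error are assumptions made for illustration):

#include <stdio.h>

/* Hypothetical organization-wide convention: every function returns 1 on
   success and 0 on error. */
int read_value(int id, int *out)
{
    if (id < 0)
        return 0;            /* error reported the same way everywhere */
    *out = id * 2;           /* dummy computation for this sketch */
    return 1;
}

int main(void)
{
    int v;
    if (!read_value(-1, &v))
        printf("read_value failed\n");   /* uniform error handling */
    return 0;
}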
Representative coding guidelines:
The following are some representative coding guidelines that are recommended by many
software development organizations. Wherever necessary, the rationale behind these guidelines
is also mentioned.
Do not use a coding style that is too clever or too difficult to understand:
Code should be easy to understand. Many inexperienced engineers actually take pride in writing
cryptic and incomprehensible code. Clever coding can obscure the meaning of the code and
reduce its understandability, thereby making maintenance and debugging difficult and expensive.
Avoid obscure side effects:
The side effects of a function call include modifications to the parameters passed by reference,
modification of global variables, and I/O operations. An obscure side effect is one that is not
obvious from a casual examination of the code. Obscure side effects make it difficult to
understand a piece of code.
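A small sketch of an obscure side effect (the names are hypothetical):

int total = 0;     /* global variable */

/* Looks like a pure query, but silently modifies the global variable
   'total' -- a side effect that is not apparent at the call site. */
int get_count(int x)
{
    total += x;    /* obscure side effect */
    return x + 1;
}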
Do not use an identifier for multiple purposes:
Programmers often use the same identifier to denote several temporary entities. For example,
some programmers make use of a temporary loop variable for also computing and storing the
final result. Three variables would use up three memory locations, whereas when the same
variable is used for three different purposes, only one memory location is used. However, several
things are wrong with this approach, and hence it should be avoided. Some of the problems
caused by the use of a variable for multiple purposes are as follows (a short sketch of such reuse
is given after the list):
 Each variable should be given a descriptive name indicating its purpose. This is not
possible if an identifier is used for multiple purposes. Use of a variable for multiple
purposes can lead to confusion and make it difficult for somebody trying to read and
understand the code.
 Use of variables for multiple purposes usually makes future enhancements more difficult.
For example, while changing the final computed result from integer to float type, the
programmer might subsequently notice that it has also been used as a temporary loop
variable that cannot be a float type.
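A short sketch of such reuse (hypothetical code):

int sum_ends(int a[], int n)
{
    int i;                   /* used as the loop counter ...             */
    for (i = 0; i < n; i++)
        a[i] = a[i] * 2;
    i = a[0] + a[n - 1];     /* ... and reused to store the final result;
                                changing the result to float would clash
                                with its earlier use as an array index   */
    return i;
}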
Code should be well-documented:
As a rule of thumb, there should be at least one comment line on the average for every three
source lines of code.
Length of any function should not exceed 10 source lines:
A lengthy function is usually very difficult to understand as it probably has a large number of
variables and carries out many different types of computations. For the same reason, lengthy
functions are likely to have a disproportionately larger number of bugs.
Do not use GO TO statements:
Use of GO TO statements makes a program unstructured. This makes the program very difficult
to understand, debug, and maintain.

CODE REVIEW
Testing is an effective defect removal mechanism. However, testing is applicable to only
executable code. Review is a very effective technique to remove defects from source code. In
fact, review has been acknowledged to be more cost-effective in removing defects as compared
to testing. Over the years, review techniques have become extremely popular and have been
generalized for use with other work products.
 Code review for a module is undertaken after the module successfully compiles. That is,
all the syntax errors have been eliminated from the module.
 Obviously, code review does not target syntax errors in a program, but is
designed to detect logical, algorithmic, and programming errors.
 Code review has been recognized as an extremely cost-effective strategy for eliminating
coding errors and for producing high quality code.
 The reason behind why code review is a much more cost-effective strategy to eliminate
errors from code compared to testing is that reviews directly detect errors.
 On the other hand, testing only helps detect failures and significant effort is needed to
locate the error during debugging.
Eliminating an error from code involves three main activities—testing, debugging, and then
correcting the errors. Testing is carried out to detect if the system fails to work satisfactorily for
certain types of inputs and under certain circumstances. Once a failure is detected, debugging is
carried out to locate the error that is causing the failure and to remove it. Of these three
activities, debugging is possibly the most laborious and time consuming. In code
inspection, errors are directly detected, thereby saving the significant effort that would have been
required to locate the error.
Normally, the following two types of reviews are carried out on the code of a module:
 Code inspection.
 Code walkthrough.
The procedures for conduction and the final objectives of these two review techniques are very
different.

Code Walkthrough
Code walkthrough is an informal code analysis technique. In this technique, a module is
taken up for review after the module has been coded, successfully compiled, and all syntax errors
have been eliminated. A few members of the development team are given the code a couple of
days before the walkthrough meeting. Each member selects some test cases and simulates
execution of the code by hand (i.e., traces the execution through different statements and
functions of the code).
 The main objective of code walkthrough is to discover the algorithmic and logical errors
in the code.

The members note down their findings of their walkthrough and discuss those in a walkthrough
meeting where the coder of the module is present. Even though code walkthrough is an informal
analysis technique, several guidelines have evolved over the years for making this naive but
useful analysis technique more effective. These guidelines are based on personal experience,
common sense, and several other subjective factors.
 The team performing code walkthrough should neither be too big nor too small. Ideally,
it should consist of three to seven members.
 Discussions should focus on discovery of errors and avoid deliberations on how to fix the
discovered errors.
 In order to foster co-operation and to avoid the feeling among the engineers that they are
being watched and evaluated in the code walkthrough meetings, managers should not
attend the walkthrough meetings.

Code Inspection
During code inspection, the code is examined for the presence of some common programming
errors.
 The principal aim of code inspection is to check for the presence of some common types
of errors that usually creep into code due to programmer mistakes and oversights and to
check whether coding standards have been adhered to.
 As an example of the type of errors detected during code inspection, consider the classic
error of writing a procedure that modifies a formal parameter and then calling it with a
constant actual parameter.
 It is more likely that such an error will be discovered by specifically looking for this kind
of mistake in the code, rather than by simply hand-simulating execution of the code.
 In addition to the commonly made errors, adherence to coding standards is also checked
during code inspection.
 Good software development companies collect statistics regarding different types of
errors that are commonly committed by their engineers and identify the types of errors
most frequently committed.
 Such a list of commonly committed errors can be used as a checklist during code
inspection to look out for possible errors.
Following is a list of some classical programming errors which can be checked during code
inspection (a small illustrative fragment appears after the list):
 Use of uninitialized variables.
 Jumps into loops.
 Non-terminating loops.
 Incompatible assignments.
 Array indices out of bounds.
 Improper storage allocation and deallocation.
 Mismatch between actual and formal parameter in procedure calls.
 Use of incorrect logical operators or incorrect precedence among operators.
 Improper modification of loop variables.
 Comparison of equality of floating point values.
 Dangling reference caused when the referenced memory has not been allocated.
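A few of these classical errors gathered into a single hypothetical C fragment, each marked by
a comment:

#include <stdlib.h>

void sketch(void)
{
    int sum;                      /* never initialized                     */
    int a[10];
    double x = 0.1 * 3;
    int *p;

    a[10] = sum;                  /* array index out of bounds, and use of
                                     an uninitialized variable            */
    if (x == 0.3)                 /* comparison of equality of floating
                                     point values                         */
        a[0] = 0;

    p = (int *) malloc(sizeof(int));
    free(p);
    *p = 5;                       /* dangling reference                   */
}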

Clean Room Testing


 Clean room testing was pioneered at IBM. This type of testing relies heavily on
walkthroughs, inspection, and formal verification.
 The programmers are not allowed to test any of their code by executing it, other
than doing some syntax checking using a compiler.
 The main problem with this approach is that the testing effort increases, since
walkthroughs, inspection, and verification are time consuming ways of detecting simple errors.
 Also, testing-based error detection is efficient for detecting certain errors that escape
manual inspection.

SOFTWARE DOCUMENTATION
When software is developed, in addition to the executable files and the source code, several
kinds of documents such as the users' manual, software requirements specification (SRS)
document, design document, test document, installation manual, etc., are also developed.
 Good documents help enhance understandability of code. As a result, the availability of
good documents helps to reduce the effort and time required for maintenance.
 Documents help the users to understand and effectively use the system.
 Good documents help to effectively tackle the manpower turnover problem. Even when
an engineer leaves the organization, and a new engineer comes in, he can build up the
required knowledge easily by referring to the documents.
 Production of good documents helps the manager to effectively track the progress of the
project. The project manager would know that some measurable progress has been
achieved, if the results of some pieces of work have been documented and the same has
been reviewed.

Different types of software documents can broadly be classified into the following:
Internal documentation: These are provided in the source code itself.
External documentation: These are the supporting documents such as SRS document,
installation document, user manual, design document, and test document.

Internal Documentation
Internal documentation is the code comprehension features provided in the source code itself.
Internal documentation can be provided in the code in several forms. The important types of
internal documentation are the following:
 Comments embedded in the source code.
 Use of meaningful variable names.
 Module and function headers.
 Code indentation.
 Code structuring (i.e., code decomposed into modules and functions).
 Use of enumerated types.
 Use of constant identifiers.
 Use of user-defined data types.

Out of these different types of internal documentation, which one is the most valuable for
understanding a piece of code?
 Careful experiments suggest that out of all types of internal documentation, a meaningful
variable name is most useful while trying to understand a piece of code.
The research finding is obviously true when comments are written without much thought. For
example, the following style of code commenting is not much of a help in understanding the
code.
a=10; /* a made 10 */
A good style of code commenting is to write comments that clarify certain non-obvious aspects
of the working of the code, rather than cluttering the code with trivial comments. Even when a piece of
code is carefully commented, a meaningful variable name has been found to be the most helpful
in understanding the code.

External Documentation
External documentation is provided through various types of supporting documents such as the
users' manual, software requirements specification document, design document, test document,
etc.
 If the different documents are not consistent, a lot of confusion is created for somebody
trying to understand the software.
 All the documents developed for a product should be up-to-date and every change made
to the code should be reflected in the relevant external documents.
 Even if only a few documents are not up-to-date, they create inconsistency and lead to
confusion. Another important feature required of external documents is proper
understandability by the category of users for whom the document is designed. For
achieving this, Gunning's fog index is very useful.

Gunning’s fog index:


 Gunning's fog index (developed by Robert Gunning in 1952) is a metric that has been
designed to measure the readability of a document.
 The computed metric value (fog index) of a document indicates the number of years of
formal education that a person should have in order to be able to comfortably understand
that document. That is, if a certain document has a fog index of 12, anyone who has
completed his 12th class would not have much difficulty in understanding that document.
The Gunning's fog index of a document D can be computed as follows:
Fog(D) = 0.4 × (words/sentence) + percentage of words having three or more syllables

 Observe that the fog index is computed as the sum of two different factors. The first
factor computes the average number of words per sentence (total number of words in the
document divided by the total number of sentences).
 This factor therefore accounts for the common observation that long sentences are
difficult to understand.
 The second factor measures the percentage of complex words in the document. Note that
a syllable is a group of letters that can be independently pronounced.
 For example, the word "sentence" has three syllables ("sen", "ten", and "ce"). Words
having three or more syllables are complex words, and the presence of many such words
hampers the readability of a document.
Example 10.1 Consider the following sentence: "The Gunning's fog index is based on
the premise that use of short sentences and simple words makes a document easy to understand."
Calculate its Fog index.
The fog index of the above example sentence is
0.4 × (23/1) + (4/23) × 100 ≈ 26
If a users' manual is to be designed for use by factory workers whose educational qualification is
class 8, then the document should be written such that the Gunning's fog index of the document
does not exceed 8.

TESTING
 The aim of program testing is to help identify all defects in a program. However, in
practice, even after satisfactory completion of the testing phase, it is not possible to
guarantee that a program is error free.
 This is because the input data domain of most programs is very large, and it is not
practical to test the program exhaustively with respect to each value that the input can
assume. Consider a function taking a floating point number as argument.
 If a tester takes 1 second to type in a value, then even a million testers would not be able
to exhaustively test it, even after trying for a million years.
 Even with this obvious limitation of the testing process, we should not underestimate the
importance of testing.
 Careful testing can expose a large percentage of the defects existing in a program, and
therefore provides a practical way of reducing defects in a system.
Basic Concepts and Terminologies
How to test a program?
Testing a program involves executing the program with a set of test inputs and observing if the
program behaves as expected. If the program fails to behave as expected, then the input data and
the conditions under which it fails are noted for later debugging and error correction.

Figure 10.1: A simplified view of program testing.

Terminologies:
As is true for any specialized domain, the area of software testing has come to be associated with
its own set of terminologies. Given below are a few important terminologies that have been
standardized by the IEEE Standard Glossary of Software Engineering Terminology [IEEE90]:
 A mistake is essentially any programmer action that later shows up as an incorrect result
during program execution. A programmer may commit a mistake in almost any
development activity. For example, during coding a programmer might commit the
mistake of not initializing a certain variable, or might overlook the errors that might arise
in some exceptional situations such as division by zero in an arithmetic operation. Both
these mistakes can lead to an incorrect result.
 An error is the result of a mistake committed by a developer in any of the development
activities. An extremely large variety of errors can exist in a program; one
example of an error is a call made to a wrong function.
 The terms error, fault, bug, and defect are considered to be synonyms in the area of
program testing.

Example 10.2 Can a designer‘s mistake give rise to a program error? Give an example of a
designer‘s mistake and the corresponding program error.
Answer: Yes, a designer's mistake can give rise to a program error. For example, a requirement
might be overlooked by the designer, which can lead to it being overlooked in the code as well.
 A failure of a program essentially denotes an incorrect behavior exhibited by the
program during its execution. An incorrect behavior is observed either as an incorrect
result produced or as an inappropriate activity carried out by the program. Every failure is
caused by some bugs present in the program.

Example 10.3 Give an example of a program error that may not cause any failure.
Answer: Consider the following C program segment:

int marklist[10];   /* marks list of 10 students */
int roll;           /* roll number */
...
if (roll > 0)
    marklist[roll] = mark;
else
    marklist[roll] = 0;

In the above code, if the variable roll assumes zero or some negative value under some
circumstances, then an array index out of bound type of error would result. However, it may be
the case that for all allowed input values the variable roll is always assigned positive values.
Then, the else clause is unreachable and no failure would occur.

 A test case is a triplet [I, S, R], where I is the data input to the program under test, S is
the state of the program at which the data is to be input, and R is the result expected to be
produced by the program.
 An example of a test case is—[input: "abc", state: edit, result: abc is displayed], which
essentially means that the input abc needs to be applied in the edit mode, and the
expected result is that the string abc would be displayed.
 A test scenario is an abstract test case in the sense that it only identifies the aspects of the
program that are to be tested without identifying the input, state, or output. A test case
can be said to be an implementation of a test scenario. In the test case, the input, output,
and the state at which the input would be applied is designed such that the scenario can
be executed.
 A test script is an encoding of a test case as a short program. Test scripts are developed
for automated execution of the test cases.
 A test case is said to be a positive test case if it is designed to test whether the software
correctly performs a required functionality. A test case is said to be a negative test case if
it is designed to test whether the software carries out something, which is not required of
the system. As one example each of a positive test case and a negative test case, consider
a program to manage user login. A positive test case can be designed to check if a login
system validates a user with the correct user name and password. A negative test case in
this case can be a test case that checks whether the login functionality validates and
admits a user with wrong or bogus login user name or password.
 A test suite is the set of all test cases that have been designed by a tester to test a given
program.
 Testability of a requirement denotes the extent to which it is possible to determine
whether an implementation of the requirement conforms to it in both functionality and
performance.
 A failure mode of software denotes an observable way in which it can fail. In other
words, all failures that have similar observable symptoms constitute a failure mode. As
an example of the failure modes of software, consider a railway ticket booking software
that has three failure modes—failing to book an available seat, incorrect seat booking
(e.g., booking an already booked seat), and system crash.
 Equivalent faults denote two or more bugs that result in the system failing in the same
failure mode. As an example of equivalent faults, consider the following two faults in C
language—division by zero and illegal memory access errors. These two are equivalent
faults, since each of these leads to a program crash.
Verification versus validation
The objectives of both verification and validation techniques are very similar since both these
techniques are designed to help remove errors in software.
 Verification is the process of determining whether the output of one phase of software
development conforms to that of its previous phase; whereas validation is the process of
determining whether fully developed software conforms to its requirements specification.
For example, a verification step can be to check if the design documents produced after
the design step conform to the requirements specification. On the other hand, validation is
applied to the fully developed and integrated software to check if it satisfies the
customer's requirements.
 The primary techniques used for verification include review, simulation, formal
verification, and testing. Review, simulation, and testing are usually considered as
informal verification techniques. Formal verification usually involves use of theorem
proving techniques or use of automated tools such as a model checker. On the other hand,
validation techniques are primarily based on product testing. Note that we have
categorized testing both under program verification and validation. The reason is that
unit and integration testing can be considered as verification steps where it is verified
whether the code is as per the module and module interface specifications. On the other
hand, system testing can be considered as a validation step where it is determined
whether the fully developed code is as per its requirements specification.
 Verification does not require execution of the software, whereas validation requires
execution of the software.
 Verification is carried out during the development process to check if the development
activities are proceeding alright, whereas validation is carried out to check if the right
product, as required by the customer, has been developed.
 We can therefore say that the primary objective of the verification steps is to determine
whether the steps in product development are being carried out alright, whereas
validation is carried out towards the end of the development process to determine
whether the right product has been developed.
 While verification is concerned with phase containment of errors, the aim of validation is
to check whether the deliverable software is error free.
 To achieve high product reliability in a cost-effective manner, a development team needs
to perform both verification and validation activities. The activities involved in these two
types of bug detection techniques together are called the "V and V" activities.

Error detection techniques = Verification techniques + Validation techniques

Example 10.5 Is it at all possible to develop highly reliable software, using validation techniques
alone? If so, can we say that all verification techniques are redundant?
Answer: It is possible to develop highly reliable software using validation techniques alone.
However, this would cause the development cost to increase drastically. Verification techniques
help achieve phase containment of errors and provide a means to cost-effectively remove bugs.
Testing Activities

Testing involves performing the following main activities:

Test suite design:


The set of test cases using which a program is to be tested is designed possibly using several test
case design techniques.
Running test cases and checking the results to detect failures:
Each test case is run and the results are compared with the expected results. A mismatch between
the actual result and expected results indicates a failure.
Locate error:
In this activity, the failure symptoms are analyzed to locate the errors. For each failure observed
during the previous activity, the statements that are in error are identified.
Error correction:
After the error is located during debugging, the code is appropriately changed to correct the
error.

Figure 10.2: Testing process.

Why Design Test Cases?

When test cases are designed based on random input data, many of the test cases do not
contribute to the significance of the test suite. That is, they do not help detect any additional
defects not already detected by other test cases in the suite. Testing software using a large
collection of randomly selected test cases does not guarantee that all (or even most) of the errors
in the system will be uncovered. Let us try to understand why the number of random test cases in
a test suite is, in general, not an indicator of the effectiveness of testing. Consider the following
example code segment which determines the greater of two integer values x and y. This code
segment has a simple programming error:

if (x>y) max = x;
else max = x;

For the given code segment, the test suite {(x=3,y=2);(x=2,y=3)} can detect the error, whereas a
larger test suite {(x=3,y=2);(x=4,y=3);(x=5,y=1)} does not detect the error. All the test cases in
the larger test suite exercise only the first (correct) branch, so the error in the else branch remains
undetected. So, it would be incorrect to say that a larger test suite would always detect more
errors than a smaller one, unless of course the larger test suite has also been carefully designed.
This implies that for effective testing, the test suite should be carefully designed rather than
picked randomly. A minimal test suite is a carefully designed set of test cases such that each test
case helps detect different errors. This is in contrast to testing using some random input values.
There are essentially two main approaches to systematically design test cases:
 Black-box approach
 White-box (or glass-box) approach
 In the black-box approach, test cases are designed using only the functional specification
of the software.
 That is, test cases are designed solely based on an analysis of the input/output behavior
(that is, the functional behavior); their design does not require any knowledge of the
internal structure of a program. For this reason, black-box testing is also known as
functional testing. On the other hand, designing white-box test cases requires a thorough
knowledge of the internal structure of a program, and therefore white-box testing is also
called structural testing.
 Black- box test cases are designed solely based on the input-output behavior of a
program.

Testing in the Large versus Testing in the Small


A software product is normally tested in three levels or stages:
 Unit testing
 Integration testing
 System testing
During unit testing, the individual functions (or units) of a program are tested.
 Unit testing is referred to as testing in the small, whereas integration and system testing
are referred to as testing in the large.
 After testing all the units individually, the units are slowly integrated and tested after
each step of integration (integration testing).
 Finally, the fully integrated system is tested (system testing). Integration and system
testing are known as testing in the large.
 Often beginners ask the question—"Why test each module (unit) in isolation first, then
integrate these modules and test, and again test the integrated set of modules—why not
just test the integrated set of modules once thoroughly?" There are two main
reasons for this.
 First, while testing a module, other modules with which this module needs to interface
may not be ready. Moreover, it is always a good idea to first test the module in isolation
before integration, because it makes debugging easier.
 If a failure is detected when an integrated set of modules is being tested, it would be
difficult to determine which module exactly has the error.
UNIT TESTING
Unit testing is undertaken after a module has been coded and reviewed. This activity is typically
undertaken by the coder of the module himself in the coding phase. Before carrying out unit
testing, the unit test cases have to be designed and the test environment for the unit under test has
to be developed.
Driver and stub modules
In order to test a single module, we need a complete environment to provide all relevant code
that is necessary for execution of the module. That is, besides the module under test, the
following are needed to test the module:
 The procedures belonging to other modules that the module under test calls.
 Non-local data structures that the module accesses.
 A procedure to call the functions of the module under test with appropriate parameters.
Modules required to provide the necessary environment (which either call or are called by the
module under test) are usually not available until they too have been unit tested. In this context,
stubs and drivers are designed to provide the complete environment for a module so that testing
can be carried out.
Stub:
The role of stub and driver modules is pictorially shown in Figure 10.3. A stub procedure is a
dummy procedure that has the same I/O parameters as the function called by the unit under test
but has a highly simplified behavior. For example, a stub procedure may produce the expected
behavior using a simple table look up mechanism.

Figure 10.3: Unit testing with the help of driver and stub modules.

Driver:
A driver module should contain the non-local data structures accessed by the module under test.
Additionally, it should also have the code to call the different functions of the unit under test
with appropriate parameter values for testing.
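A minimal sketch of this arrangement, assuming the unit under test is a hypothetical function
compute_interest() that calls a not-yet-ready function get_rate():

#include <stdio.h>

/* Stub: same I/O parameters as the real get_rate(), but a highly
   simplified behavior (here, a simple table look-up). */
float get_rate(int account_type)
{
    return (account_type == 1) ? 5.0f : 7.0f;
}

/* Unit under test: calls get_rate(), whose real version is not ready. */
float compute_interest(float principal, int account_type)
{
    return principal * get_rate(account_type) / 100.0f;
}

/* Driver: supplies the test data and calls the unit under test with
   appropriate parameter values. */
int main(void)
{
    printf("%.2f\n", compute_interest(1000.0f, 1));   /* expect 50.00 */
    printf("%.2f\n", compute_interest(1000.0f, 2));   /* expect 70.00 */
    return 0;
}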
BLACK-BOX TESTING
In black-box testing, test cases are designed from an examination of the input/output values only
and no knowledge of design or code is required. The following are the two main approaches
available to design black box test cases:
 Equivalence class partitioning
 Boundary value analysis

Equivalence Class Partitioning


In the equivalence class partitioning approach, the domain of input values to the program under
test is partitioned into a set of equivalence classes. The partitioning is done such that for every
input data belonging to the same equivalence class, the program behaves similarly. The main
idea behind defining equivalence classes of input data is that testing the code with any one value
belonging to an equivalence class is as good as testing the code with any other value belonging to
the same equivalence class. Equivalence classes for a unit under test can be designed by
examining the input data and output data.
1. If the input data values to a system can be specified by a range of values, then one valid
and two invalid equivalence classes need to be defined. For example, if the equivalence
class is the set of integers in the range 1 to 10 (i.e., [1,10]), then the invalid equivalence
classes are [−∞,0], [11,+∞].
2. If the input data assumes values from a set of discrete members of some domain, then one
equivalence class for the valid input values and another equivalence class for the invalid
input values should be defined. For example, if the valid equivalence class is
{A,B,C}, then the invalid equivalence class is U−{A,B,C}, where U is the universe of
possible input values.
Example 10.6
For software that computes the square root of an input integer that can assume values in the
range of 0 to 5000, determine the equivalence classes and the black-box test suite.
Answer: There are three equivalence classes—The set of negative integers, the set of integers in
the range of 0 and 5000, and the set of integers larger than 5000. Therefore, the test cases must
include representatives for each of the three equivalence classes. A possible test suite can be:
{-5, 500, 6000}.
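A sketch of how this suite might be exercised, assuming a hypothetical unit under test
int_sqrt() that is specified to return -1 for out-of-range inputs:

#include <stdio.h>

/* Hypothetical unit under test: integer square root for inputs in the
   range 0 to 5000; returns -1 for out-of-range inputs. */
int int_sqrt(int n)
{
    int r = 0;
    if (n < 0 || n > 5000)
        return -1;
    while ((r + 1) * (r + 1) <= n)
        r++;
    return r;
}

int main(void)
{
    int suite[] = { -5, 500, 6000 };   /* one representative per class */
    int i;
    for (i = 0; i < 3; i++)
        printf("int_sqrt(%d) = %d\n", suite[i], int_sqrt(suite[i]));
    return 0;
}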
Example 10.7 Design the equivalence class test cases for a program that reads two integer pairs
(m1, c1) and (m2, c2) defining two straight lines of the form y=mx+c. The program computes
the intersection point of the two straight lines and displays the point of intersection.
Answer: The equivalence classes are the following:
• Parallel lines (m1 = m2, c1 ≠ c2)
• Intersecting lines (m1 ≠ m2)
• Coincident lines (m1 = m2, c1 = c2)
Now, selecting one representative value from each equivalence class, we get the required
equivalence class test suite {((2,2),(2,5)), ((5,5),(7,7)), ((10,10),(10,10))}.
Example 10.8 Design equivalence class partitioning test suite for a function that reads a
character string of size less than five characters and displays whether it is a palindrome.
Answer: The equivalence classes are the leaf level classes shown in Figure 10.4. The
equivalence classes are palindromes, non-palindromes, and invalid inputs. Now, selecting one
representative value from each equivalence class, we have the required test suite:
{abc,aba,abcdef}.
Figure 10.4: Equivalence classes for Example 10.8.

Boundary Value Analysis


A type of programming error that is frequently committed by programmers is missing out on the
special consideration that should be given to the values at the boundaries of different equivalence
classes of inputs. For example, programmers may improperly use < instead of <=, or conversely
<= for <, etc. Boundary value analysis-based test suite design involves designing test cases using
the values at the boundaries of different equivalence classes. To design boundary value test
cases, it is required to examine the equivalence classes to check if any of the equivalence classes
contains a range of values. For those equivalence classes that are not a range of values (i.e.,
consist of a discrete collection of values) no boundary value test cases can be defined. For an
equivalence class that is a range of values, the boundary values need to be included in the test
suite. For example, if an equivalence class contains the integers in the range 1 to 10, then the
boundary value test suite is {0,1,10,11}.
Example 10.9 For a function that computes the square root of the integer values in the range of 0
and 5000, determine the boundary value test suite.
Answer: There are three equivalence classes—The set of negative integers, the set of integers in
the range of 0 and 5000, and the set of integers larger than 5000. The boundary value-based test
suite is: {0,-1,5000,5001}.
Example 10.10 Design the boundary value test suite for the function described in Example 10.8.
Answer: The equivalence classes have been shown in Figure 10.4. There is a boundary between
the valid and invalid equivalence classes. Thus, the boundary value test suite is {abcdefg,
abcdef}.

Figure 10.5: CFG for (a) sequence, (b) selection, and (c) iteration type of constructs.

Summary of the Black-box Test Suite Design Approach


We now summarize the important steps in the black-box test suite design approach:
 Examine the input and output values of the program.
 Identify the equivalence classes.
 Design equivalence class test cases by picking one representative value from each
equivalence class.
 Design the boundary value test cases as follows. Examine if any equivalence class is a
range of values. Include the values at the boundaries of such equivalence classes in the
test suite.
The strategy for black-box testing is intuitive and simple. For black-box testing, the most
important step is the identification of the equivalence classes.
WHITE-BOX TESTING
White-box testing is an important type of unit testing. A large number of white-box testing
strategies exist. Each testing strategy essentially designs test cases based on analysis of some
aspect of source code and is based on some heuristic.

Basic Concepts
A white-box testing strategy can either be coverage-based or fault based.
Fault-based testing
A fault-based testing strategy targets to detect certain types of faults. These faults that a test
strategy focuses on constitute the fault model of the strategy. An example of a fault-based
strategy is mutation testing, which is discussed later in this section.
Coverage-based testing
A coverage-based testing strategy attempts to execute (or cover) certain elements of a program.
Popular examples of coverage-based testing strategies are statement coverage, branch coverage,
multiple condition coverage, and path coverage-based testing.
Testing criterion for coverage-based testing
A coverage-based testing strategy typically targets to execute (i.e., cover) certain program
elements for discovering failures. The set of specific program elements that a testing strategy
targets to execute is called the testing criterion of the strategy. For example, if a testing strategy
requires all the statements of a program to be executed at least once, then statement coverage is
said to be the testing criterion of that strategy.

Stronger versus weaker testing


 A white-box testing strategy is said to be stronger than another strategy, if the stronger
testing strategy covers all program elements covered by the weaker testing strategy, and
the stronger strategy additionally covers at least one program element that is not covered
by the weaker strategy.
 If a stronger testing has been performed, then a weaker testing need not be
carried out.

Figure 10.6: Illustration of stronger, weaker, and complementary testing strategies.


We need to point out that coverage-based testing is frequently used to check the quality of testing
achieved by a test suite. It is hard to manually design a test suite to achieve a specific coverage
for a non-trivial program.

Statement Coverage
The principal idea governing the statement coverage strategy is that unless a statement is
executed, there is no way to determine whether an error exists in that statement.

Example 10.11 Design a statement coverage-based test suite for the following Euclid's GCD
computation program:

int computeGCD(int x, int y)
{
1   while (x != y) {
2       if (x > y)
3           x = x - y;
4       else y = y - x;
5   }
6   return x;
}
Answer: To design the test cases for the statement coverage, the conditional expression of the
while statement needs to be made true and the conditional expression of the if statement needs to
be made both true and false. By choosing the test set {(x = 3, y = 3), (x = 4, y = 3), (x = 3, y =
4)}, all statements of the program would be executed at least once.
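A sketch of a driver that runs this suite (assuming it is compiled together with the
computeGCD() listing above):

#include <stdio.h>

int computeGCD(int x, int y);   /* the function listed above */

int main(void)
{
    /* Together, these three cases execute every statement at least once. */
    printf("%d\n", computeGCD(3, 3));   /* while-condition false at entry  */
    printf("%d\n", computeGCD(4, 3));   /* if-condition evaluates to true  */
    printf("%d\n", computeGCD(3, 4));   /* if-condition evaluates to false */
    return 0;
}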

Branch Coverage
A test suite satisfies branch coverage, if it makes each branch condition in the program to assume
true and false values in turn. In other words, for branch coverage each branch in the CFG
representation of the program must be taken at least once, when the test suite is executed. Branch
testing is also known as edge testing, since in this testing scheme, each edge of a program's
control flow graph is traversed at least once.
Theorem 10.1 Branch coverage-based testing is stronger than statement coverage-based testing.
Proof: We need to show that (a) branch coverage ensures statement coverage, and (b) statement
coverage does not ensure branch coverage.
(a) Branch testing would guarantee statement coverage since every statement must belong to
some branch (assuming that there is no unreachable code).
(b) To show that statement coverage does not ensure branch coverage, it would be sufficient to
give an example of a test suite that achieves statement coverage, but does not cover at least one
branch. Consider the following code, and the test suite {5}.
if(x>2) x+=1;
The test suite would achieve statement coverage. However, it does not achieve branch coverage,
since the condition (x > 2) is not made false by any test case in the suite.

Multiple Condition Coverage


In the multiple conditions (MC) coverage-based testing, test cases are designed to make each
component of a composite conditional expression to assume both true and false values. For
example, consider the composite conditional expression ((c1 .and.c2 ).or.c3). A test suite would
achieve MC coverage, if all the component conditions c1, c2 and c3 are each made to assume
both true and false values.
Example 10.13 Give an example of a fault that is detected by multiple condition coverage, but
not by branch coverage.
Answer: Consider a C program segment along the lines of the following sketch (one possible
answer; the fault here is an && written in place of an intended ||):
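int check(int x, int y)
{
    int z;
    /* Intended condition: (x > 0) || (y > 0); the fault: && instead of ||. */
    if ((x > 0) && (y > 0))
        z = 1;
    else
        z = 0;
    return z;
}

/* A branch coverage suite such as {(1,1), (-1,-1)} makes the composite
   condition assume true and false, yet both results happen to be correct,
   so the fault escapes. MC coverage additionally forces each of (x > 0)
   and (y > 0) to assume true and false in turn; for the case (1,-1) the
   faulty code returns 0 while the intended condition yields 1, so the
   fault is detected. */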

Path Coverage
A test suite achieves path coverage if it executes each linearly independent path (or basis path)
at least once. A linearly independent path can be defined in terms of the control flow graph
(CFG) of a program. Therefore, to understand the path coverage-based testing strategy, we need
to first understand how the CFG of a program can be drawn.

Control flow graph (CFG)


A control flow graph describes the sequence in which the different instructions of a program get
executed. In order to draw the control flow graph of a program, we need to first number all the
statements of a program. The different numbered statements serve as nodes of the control flow
graph (see Figure 10.5). There exists an edge from one node to another, if the execution of the
statement representing the first node can result in the transfer of control to the other node. More
formally, we can define a CFG as follows.
 A CFG is a directed graph consisting of a set of nodes and edges (N, E), such that each
node n ∈ N corresponds to a unique program statement and an edge exists between two
nodes if control can transfer from one node to the other.
 We can easily draw the CFG for any program, if we know how to represent the sequence,
selection, and iteration types of statements in the CFG. After all, every program is
constructed by using these three types of constructs only.
 The CFG representation of the sequence and decision types of statements is
straightforward. Please note carefully how the CFG for the loop (iteration) construct can be
drawn.
 For iteration type of constructs such as the while construct, the loop condition is tested
only at the beginning of the loop, and therefore control always flows from the last
statement of the loop back to the top of the loop.
Figure 10.7: Control flow diagram of an example program.

Path
A path through a program is any node and edge sequence from the start node to a terminal node
of the control flow graph of a program.
Linearly independent set of paths (or basis path set)
If a set of paths is linearly independent of each other, then no path in the set can be obtained
through any linear operations (i.e., additions or subtractions) on the other paths in the set. Even
though it is straightforward to identify the linearly independent paths for simple programs, for
more complex programs it is not easy to determine the number of independent paths. In this
context, McCabe's cyclomatic complexity metric is an important result that lets us compute the
number of linearly independent paths for any arbitrary program.

McCabe’s Cyclomatic Complexity Metric


McCabe obtained his results by applying graph-theoretic techniques to the control flow graph of
a program. McCabe's cyclomatic complexity defines an upper bound on the number of
independent paths in a program.
Method 1: Given a control flow graph G of a program, the cyclomatic complexity V(G) can be
computed as:
V(G) = E − N + 2
where N is the number of nodes of the control flow graph and E is the number of edges in the
control flow graph. For the CFG of the example shown in Figure 10.7, E = 7 and N = 6.
Therefore, the value of the cyclomatic complexity = 7 − 6 + 2 = 3.
Method 2: An alternative way of computing the cyclomatic complexity of a program is based on
a visual inspection of the control flow graph. In this method, the cyclomatic complexity V(G) for
a graph G is given by the following expression:
V(G) = Total number of non-overlapping bounded areas + 1
In the program's control flow graph G, any region enclosed by nodes and edges can be called a
bounded area. This is an easy way to determine McCabe's cyclomatic complexity. But
what if the graph G is not planar (i.e., however you draw the graph, two or more edges always
intersect)? Actually, it can be shown that the control flow representation of structured programs
always yields planar graphs. But the presence of GOTOs can easily add intersecting edges.
Therefore, for non-structured programs, this way of computing McCabe's cyclomatic
complexity does not apply.
Method 3: The cyclomatic complexity of a program can also be easily computed by counting
the number of decision and loop statements of the program. If N is the number of decision and
loop statements of a program, then McCabe's metric is equal to N + 1. For example, the GCD
program of Example 10.11 contains one while and one if statement, so its cyclomatic
complexity is 2 + 1 = 3, which agrees with Method 1.

How is path testing carried out by using computed McCabe’s cyclomatic metric value?
For the CFG of a moderately complex program segment of say 20 nodes and 25 edges, you may
need several days of effort to identify all the linearly independent paths in it and to design the
test cases. It is therefore impractical to require the test designers to identify all the linearly
independent paths in a code, and then design the test cases to force execution along each of the
identified paths. In practice, for path testing, usually the tester keeps on forming test cases with
random data and executes those until the required coverage is achieved.

Steps to carry out path coverage-based testing


The following is the sequence of steps that need to be undertaken for deriving the path coverage-
based test cases for a program:
1. Draw the control flow graph for the program.
2. Determine McCabe's metric V(G). This gives the minimum number of test cases required
to achieve path coverage.
3. Repeatedly form test cases with random data and execute them, until the required path
coverage is achieved.

Uses of McCabe’s cyclomatic complexity metric


Besides its use in path testing, the cyclomatic complexity of programs has many other interesting
applications, such as the following:
Estimation of structural complexity of code:
McCabe's cyclomatic complexity is a measure of the structural complexity of a program. The
reason for this is that it is computed based on the code structure (number of decision and
iteration constructs used). Cyclomatic complexity of a program is a measure of the psychological
complexity or the level of difficulty in understanding the program. Good software development
organizations usually restrict the cyclomatic complexity of different functions to a maximum
value of ten or so. This is in contrast to the computational complexity that is based on the
execution of the program statements.
Estimation of testing effort:
Cyclomatic complexity is a measure of the maximum number of basis paths. Thus, it indicates
the minimum number of test cases required to achieve path coverage. Therefore, the testing
effort and the time required to test a piece of code satisfactorily is proportional to the cyclomatic
complexity of the code. To reduce testing effort, it is necessary to restrict the cyclomatic
complexity of every function to seven.
Estimation of program reliability:
Experimental studies indicate that there exists a clear relationship between McCabe's metric and
the number of errors latent in the code after testing. This relationship exists possibly due to the
correlation of cyclomatic complexity with the structural complexity of code. Usually, the larger
the structural complexity, the more difficult it is to test and debug the code.

Data Flow-based Testing


Data flow-based testing methods select test paths of a program according to the definitions and
uses of the different variables in a program.
Consider a program P. For a statement numbered S of P, let

DEF(S) = {X | statement S contains a definition of X} and

USES(S) = {X | statement S contains a use of X}

For the statement S: a=b+c; DEF(S)={a} and USES(S)={b, c}. The definition of variable X at
statement S is said to be live at statement S1, if there exists a path from statement S to statement
S1 which does not contain any definition of X.
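As an illustration, consider the following hypothetical fragment, with the statements numbered:

1  x = 10;          /* DEF(1) = {x},  USES(1) = { }   */
2  y = x + 2;       /* DEF(2) = {y},  USES(2) = {x}   */
3  if (y > 5)       /* DEF(3) = { },  USES(3) = {y}   */
4      x = y - 1;   /* DEF(4) = {x},  USES(4) = {y}   */
5  z = x;           /* DEF(5) = {z},  USES(5) = {x}   */

The definition of x at statement 1 is live at statement 2, since the path from statement 1 to
statement 2 contains no other definition of x; however, it is not live at statement 5 along the
path 1-2-3-4-5, because statement 4 redefines x.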

Mutation Testing
 All white-box testing strategies that we have discussed so far, are coverage-based testing
techniques. In contrast, mutation testing is a fault-based testing technique in the sense that
mutation test cases are designed to help detect specific types of faults in a program.
 In mutation testing, a program is first tested by using an initial test suite designed by
using various white box testing strategies that we have discussed.
 After the initial testing is complete, mutation testing can be taken up. The idea behind
mutation testing is to make a few arbitrary changes to a program at a time.
 Each time the program is changed, it is called a mutated program and the change effected
is called a mutant.
 An underlying assumption behind mutation testing is that all programming errors can be
expressed as a combination of simple errors. A mutation operator makes specific changes
to a program.
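A sketch of a single mutation (the function and the mutation operator chosen here are
hypothetical):

/* Original function under test. */
int add(int a, int b) { return a + b; }

/* Mutated program: a mutation operator has replaced '+' with '-'. */
int add_mutant(int a, int b) { return a - b; }

A test case that produces different results for the original and the mutated program is said to
kill the mutant. Here, the test case add(2, 0) fails to kill the mutant, since 2 + 0 and 2 - 0 are
both 2, whereas add(2, 3) kills it, because 5 differs from -1.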

DEBUGGING
After a failure has been detected, it is necessary to first identify the program statement(s) that are
in error and are responsible for the failure; the error can then be fixed.

Debugging Approaches
The following are some of the approaches that are popularly adopted by the programmers for
debugging:
Brute force method
This is the most common method of debugging but is the least efficient method. Single stepping
using a symbolic debugger is another form of this approach, where the developer mentally
computes the expected result after every source instruction and checks whether the same is
computed by single stepping through the program.
Backtracking
This is also a fairly common approach. In this approach, starting from the statement at which an
error symptom has been observed, the source code is traced backwards until the error is
discovered. Unfortunately, as the number of source lines to be traced back increases, the number
of potential backward paths increases and may become unmanageably large for complex
programs, limiting the use of this approach.
Cause elimination method
In this approach, once a failure is observed, the symptoms of the failure (i.e., a certain variable
having a negative value though it should be positive, etc.) are noted. Based on the failure
symptoms, a list of causes which could possibly have contributed to the symptom is developed,
and tests are conducted to eliminate each. A related technique of identification of the error from
the error symptom is software fault tree analysis.
Program slicing
This technique is similar to backtracking. In the backtracking approach, one often has to
examine a large number of statements. However, the search space is reduced by defining slices.
A slice of a program for a particular variable and at a particular statement is the set of source
lines preceding this statement that can influence the value of that variable [Mund2002]. Program
slicing makes use of the fact that an error in the value of a variable can be caused by the
statements on which it is data dependent.
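A sketch of a slice on a hypothetical fragment:

1  scanf("%d", &a);
2  scanf("%d", &b);
3  c = a + 1;
4  d = b * 2;
5  printf("%d\n", d);

The slice for the variable d at statement 5 is {2, 4}: only these lines can influence the value
of d, so statements 1 and 3 need not be examined while debugging an incorrect value observed
for d.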
Debugging Guidelines
Debugging is often carried out by programmers based on their ingenuity and experience. The
following are some general guidelines for effective debugging:
 Many times debugging requires a thorough understanding of the program design. Trying
to debug based on a partial understanding of the program design may require an
inordinate amount of effort to be put into debugging even for simple problems.
 Debugging may sometimes even require full redesign of the system. In such cases, a
common mistake that novice programmers often make is attempting to fix not the error
itself but only its symptoms.
 One must beware of the possibility that an error correction may introduce new errors.
Therefore, after every round of error-fixing, regression testing (see Section 10.13) must be
carried out.

PROGRAM ANALYSIS TOOLS


A program analysis tool usually is an automated tool that takes either the source code or the
executable code of a program as input and produces reports regarding several important
characteristics of the program, such as its size, complexity, adequacy of commenting, adherence
to programming standards, adequacy of testing, etc. We can classify various program analysis
tools into the following two broad categories:
 Static analysis tools
 Dynamic analysis tools
Static Analysis Tools
Static program analysis tools assess and compute various characteristics of a program without
executing it. Typically, static analysis tools analyze the source code to compute certain metrics
characterizing the source code (such as size, cyclomatic complexity, etc.) and also report certain
analytical conclusions, such as the following:
 To what extent have the coding standards been adhered to?
 Do certain types of programming errors exist, such as uninitialized variables, mismatches
between actual and formal parameters, variables that are declared but never used, etc.? A
list of all such errors is displayed.

In high-level programming languages, pointer variables and dynamic memory allocation provide
the capability for dynamic memory references. However, dynamic memory referencing is a
major source of programming errors, and since the memory locations referenced become known
only at run time, static analysis tools are of little help in detecting such errors. Static analysis
tools often summarize the results of analysis of every function in a polar chart known as a Kiviat
chart. A Kiviat chart typically shows the analyzed values for cyclomatic complexity, number of
source lines, percentage of comment lines, Halstead's metrics, etc.

Dynamic Analysis Tools


Dynamic program analysis tools can be used to evaluate several characteristics of a program
while it executes. A dynamic program analysis tool (also called a dynamic analyser) usually
collects execution trace information by instrumenting the code. Code instrumentation is usually
achieved by inserting additional statements into the program to record, say, the values of certain
variables in a file, thereby collecting the execution trace of the program. The instrumented code,
when executed, records the behaviour of the software for different test cases.
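The following minimal Python sketch is hand-instrumented for the purpose of illustration (a real dynamic analyser would insert such statements automatically); it records an execution trace from which branch coverage can later be computed:

trace = []   # execution trace collected by the instrumentation

def classify(x):
    if x < 0:
        trace.append("branch:negative")       # instrumentation statement
        return "negative"
    trace.append("branch:non-negative")       # instrumentation statement
    return "non-negative"

# Run the test suite on the instrumented code ...
for test_input in [10, 42]:
    classify(test_input)

# ... then carry out a post-execution analysis of the recorded trace.
covered = set(trace)
print(f"branches covered: {covered}")

Here the post-execution analysis reveals that the 'negative' branch was never exercised, so branch coverage is only 50 per cent, prompting the addition of a test case with a negative input.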
 An important characteristic of a test suite that is computed by a dynamic analysis tool is
the extent of coverage achieved by the test suite.
After software has been tested with its full test suite and its behavior recorded, the dynamic
analysis tool carries out a post execution analysis and produces reports which describe the
coverage that has been achieved by the complete test suite for the program. For example, the
dynamic analysis tool can report the statement, branch, and path coverage achieved by a test
suite. If the coverage achieved is not satisfactory, more test cases can be designed, added to the
test suite, and run. Further, dynamic analysis results can help eliminate redundant test cases from
a test suite. Normally the dynamic analysis results are reported in the form of a histogram or pie
chart to describe the structural coverage achieved for different modules of the program. The
output of a dynamic analysis tool can be stored and printed easily to provide evidence that
thorough testing has been carried out.

INTEGRATION TESTING
Integration testing is carried out after all (or at least some of) the modules have been unit tested.
Successful completion of unit testing, to a large extent, ensures that the unit (or module) as a
whole works satisfactorily. In this context, the objective of integration testing is to detect the
errors at the module interfaces (call parameters).
 For example, it is checked that no parameter mismatch occurs when one module invokes
the functionality of another module.
 Thus, the primary objective of integration testing is to check whether the different
modules of a program interface with each other properly, i.e., that there are no errors in
parameter passing when one module invokes the functionality of another module.
 During integration testing, different modules of a system are integrated in a planned
manner using an integration plan. The integration plan specifies the steps and the order in
which modules are combined to realize the full system.
After each integration step, the partially integrated system is tested. The following approaches
can be used to develop the integration test plan:
 Big-bang approach to integration testing
 Top-down approach to integration testing
 Bottom-up approach to integration testing
 Mixed (also called sandwiched ) approach to integration testing

Big-bang approach to integration testing


 Big-bang testing is the most obvious approach to integration testing. In this approach, all
the modules making up a system are integrated in a single step.
 In simple words, all the unit tested modules of the system are simply linked together and
tested. However, this technique can meaningfully be used only for very small systems.
 The main problem with this approach is that once a failure has been detected during
integration testing, it is very difficult to localize the error as the error may potentially lie
in any of the modules.
 Therefore, debugging errors reported during big-bang integration testing are very
expensive to fix. As a result, big-bang integration testing is almost never used for large
programs.
Bottom-up approach to integration testing
 Large software products are often made up of several subsystems. A subsystem might
consist of many modules which communicate among each other through well-defined
interfaces.
 In bottom-up integration testing, first the modules for each subsystem are integrated.
Thus, the subsystems can be integrated separately and independently. Since the lower-
level modules are integrated before the modules that call them, this approach requires the
use of driver routines, but no stubs.
 The primary purpose of carrying out the integration testing of a subsystem is to test
whether the interfaces among the various modules making up the subsystem work
satisfactorily.
Top-down approach to integration testing
 Top-down integration testing starts with the root module in the structure chart and one or
two subordinate modules of the root module.
 After the top-level 'skeleton' has been tested, the modules that are at the immediately
lower layer of the 'skeleton' are combined with it and tested.
 The top-down integration testing approach requires the use of program stubs to simulate
the effect of lower-level routines that are called by the routines under test (a sketch of a
stub and a driver is given after this list).
 A pure top-down integration does not require any driver routines. An advantage of top-
down integration testing is that it requires writing only stubs, and stubs are simpler to
write compared to drivers.
 A disadvantage of the top-down integration testing approach is that in the absence of
lower-level routines, it becomes difficult to exercise the top-level routines in the desired
manner since the lower level routines usually perform input/output (I/O) operations.
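The following minimal Python sketch (all the routine names and values are hypothetical) contrasts a stub, as used in top-down integration, with a driver, as used in bottom-up integration:

# Top-down: a stub simulates a lower-level routine not yet integrated.
def get_hours_worked(emp_id):        # stub: returns a canned value
    return 40

def generate_payslip(emp_id):        # higher-level routine under test
    return get_hours_worked(emp_id) * 50   # illustrative flat hourly rate

assert generate_payslip("E001") == 2000

# Bottom-up: a driver is a throwaway caller that exercises an
# already-coded lower-level routine.
def compute_tax(gross):              # lower-level routine under test
    return gross // 10               # illustrative 10 per cent tax

def tax_driver():
    for gross, expected in [(2000, 200), (0, 0)]:
        assert compute_tax(gross) == expected

tax_driver()

As the sketch suggests, the stub merely returns plausible canned values, whereas the driver must set up inputs and check outputs, which is why drivers are generally harder to write than stubs.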

Mixed approach to integration testing


 The mixed (also called sandwiched) integration testing follows a combination of the top-
down and bottom-up testing approaches.
 In the top-down approach, testing can start only after the top-level modules have been
coded and unit tested. Similarly, bottom-up testing can start only after the bottom-level
modules are ready. The mixed approach overcomes this shortcoming of the top-down and
bottom-up approaches: in the mixed approach, testing can start as and when modules
become available after unit testing.

10.10.1 Phased versus Incremental Integration Testing


Big-bang integration testing is carried out in a single step of integration. In contrast, in the other
strategies, integration is carried out over several steps. In these latter strategies, modules can be
integrated either in a phased or in an incremental manner. A comparison of these two strategies
is as follows: in incremental integration testing, only one new module is added to the partially
integrated system each time, whereas in phased integration, a group of related modules is added
to the partial system each time. Phased integration therefore requires fewer integration steps;
however, when a failure is detected under the incremental approach, it is easier to localize the
error, since it can be attributed to the single newly added module.

TESTING OBJECT-ORIENTED PROGRAMS


During the initial years of object-oriented programming, it was believed that object-orientation
would, to a great extent, reduce the cost and effort incurred on testing. This thinking was based
on the observation that object-orientation incorporates several good programming features such
as encapsulation, abstraction, reuse through inheritance, polymorphism, etc., thereby minimizing
the chances of errors in the code. However, it was soon realized that satisfactorily testing object-
oriented programs is much more difficult and requires much more cost and effort as compared to
testing similar procedural programs. The main reason is that the various object-oriented features
introduce additional complications and scope for new types of bugs that are not present in
procedural programs. Therefore, additional test cases need to be designed to detect these.
What is a Suitable Unit for Testing Object-oriented Programs?
Since methods in an object-oriented program are analogous to procedures in a procedural
program, can we consider the methods of object-oriented programs as the basic unit of testing?
Weyuker studied this issue and postulated her anticomposition axiom as follows: adequate
testing of individual methods does not ensure that a class has been satisfactorily tested. The main
intuitive justification for the anticomposition axiom is the following. A method operates in the
scope of the data and other methods of its object. That is, all the methods share the data of the
class. Therefore, it is necessary to test a method in the context of these. Moreover, objects can
have a significant number of states, and the behavior of a method can be different based on the
state of the corresponding object. Therefore, it is not enough to test all the methods and check
whether they can be integrated satisfactorily. An object is the basic unit of testing of object-
oriented programs. Thus, in an object-oriented program, unit testing means testing each object in
isolation. During integration testing (called cluster testing in the object-oriented testing
literature), the various unit tested objects are integrated and tested.
Do Various Object-orientation Features Make Testing Easy?
In this section, we discuss the implications of different object-orientation features in testing.
Encapsulation:
The encapsulation feature helps in data abstraction, error isolation, and error prevention.
However, as far as testing is concerned, encapsulation prevents the tester from accessing the data
internal to an object, and this creates difficulty during both testing and debugging. Of course, it is
possible to require classes to support state reporting methods that print out all the data internal to
an object. Thus, though the encapsulation feature makes testing difficult, the difficulty can be
overcome to some extent through the use of appropriate state reporting methods (a minimal
sketch follows).
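The following minimal Python sketch (the Account class is hypothetical) shows how a state reporting method lets a test observe encapsulated data:

class Account:
    def __init__(self):
        self.__balance = 0            # encapsulated: not directly accessible

    def deposit(self, amount):
        self.__balance += amount

    # State reporting method added specifically to support testing and
    # debugging: it exposes the data internal to the object.
    def report_state(self):
        return {"balance": self.__balance}

acc = Account()
acc.deposit(100)
assert acc.report_state() == {"balance": 100}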
Inheritance:
The inheritance feature helps in code reuse and was expected to simplify testing. It was expected
that if a class is tested thoroughly, then the classes that are derived from this class would need
only incremental testing of the added features. However, this is not the case. Even if the base
class has been thoroughly tested, the methods inherited from the base class need to be tested
again in the derived class, because an inherited method can behave differently in the context of
the derived class, for example, when it invokes a method that the derived class has overridden
(as the sketch below illustrates).
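The following minimal Python sketch (the classes are hypothetical) shows why an inherited method must be retested: the inherited add_all behaves differently in the derived class because, through dynamic binding, it now invokes the overridden add:

class Queue:
    def __init__(self):
        self.items = []

    def add(self, x):
        self.items.append(x)

    def add_all(self, xs):        # inherited, unchanged, by PriorityQueue
        for x in xs:
            self.add(x)           # dynamically bound: may call an override

class PriorityQueue(Queue):
    def add(self, x):             # override changes the behavior of the
        self.items.append(x)      # inherited add_all as well
        self.items.sort()

q = PriorityQueue()
q.add_all([3, 1, 2])
assert q.items == [1, 2, 3]       # add_all must be retested in PriorityQueue

The same sketch also illustrates the dynamic binding issue discussed next: to test add_all adequately, both possible bindings of the call to add have to be exercised.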
Dynamic binding:
Dynamic binding was introduced to make the code compact, elegant, and easily extensible.
However, as far as testing is concerned all possible bindings of a method call have to be
identified and tested. This is not easy since the bindings take place at run-time.
Object states:
In contrast to the procedures in a procedural program, objects store data throughout their
lifetime. As a result, objects can have a significant number of states. The behavior of an object is
usually different in different states; that is, some methods may not be active in some of its states,
and a method may act differently in different states. Besides this, there are many other
significant differences between testing procedural and object-oriented programs. For example,
statement coverage-based testing, which is popular for testing procedural programs, is not
meaningful for object-oriented programs.
 The reason is that inherited methods have to be retested in the derived class. In fact, the
different object-oriented features (inheritance, polymorphism, dynamic binding, state-
based behavior, etc.) make test case design based on the source code alone difficult.
 The various object-orientation features are explicit in the design models, whereas they
are usually difficult to extract from an analysis of the source code. As a result, the design
model is a valuable artifact for testing object-oriented programs.
 Test cases are, therefore, designed based on the design model. Since this approach is
intermediate between a fully white-box and a fully black-box approach, it is called a
grey-box approach. Please note that grey-box testing is considered important for object-
oriented programs.
Grey-Box Testing of Object-oriented Programs
For object-oriented programs, several types of test cases can be designed based on the design
models of object-oriented programs. These are called grey-box test cases. The following are
some important types of grey-box testing that can be carried out based on UML models (a small
sketch of state-model-based testing follows this list):
State-model-based testing
State coverage: Each method of an object is tested at each state of the object.
State transition coverage: It is tested whether all transitions depicted in the state model work
satisfactorily.
State transition path coverage: All transition paths in the state model are tested.
Use case-based testing
Scenario coverage: Each use case typically consists of a mainline scenario and several alternate
scenarios. For each use case, the mainline and all alternate sequences are tested to check if any
errors show up.
Class diagram-based testing
Derived classes: All derived classes of the base class have to be instantiated and tested. In
addition to testing the new methods defined in the derived class, the inherited methods must be
retested.
Association testing: All association relations are tested.
Aggregation testing: Various aggregate objects are created and tested.
Sequence diagram-based testing
Method coverage: All methods depicted in the sequence diagrams are covered.
Message path coverage: All message paths that can be constructed from the sequence diagrams
are covered.
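As a minimal illustration of state-model-based testing, consider the hypothetical Python class below, whose state model has two states and two transitions; the test exercises every transition (state transition coverage):

class LibraryBook:
    def __init__(self):
        self.state = "AVAILABLE"

    def issue(self):
        assert self.state == "AVAILABLE", "issue is not active in this state"
        self.state = "ISSUED"

    def return_book(self):
        assert self.state == "ISSUED", "return is not active in this state"
        self.state = "AVAILABLE"

book = LibraryBook()
book.issue()                       # transition: AVAILABLE -> ISSUED
assert book.state == "ISSUED"
book.return_book()                 # transition: ISSUED -> AVAILABLE
assert book.state == "AVAILABLE"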
Integration Testing of Object-oriented Programs
There are two main approaches to integration testing of object-oriented programs:
• Thread-based
• Use based
Thread-based approach: In this approach, all classes that need to collaborate to realize the
behavior of a single use case are integrated and tested. After all the required classes for a use
case are integrated and tested, another use case is taken up and other classes (if any) necessary
for execution of the second use case to run are integrated and tested. This is continued till all use
cases have been considered.
Use-based approach:
Use-based integration begins by testing classes that either need no services from other classes or
need services from at most a few other classes. After these classes have been integrated and
tested, classes that use the services of the already integrated classes are integrated and tested.
This is continued till all the classes have been integrated and tested.

SYSTEM TESTING
System tests are designed to validate a fully developed system to assure that it meets its
requirements. The test cases are therefore designed solely based on the SRS document. There are
essentially three main kinds of system testing depending on who carries out testing:
1. Alpha Testing: Alpha testing refers to the system testing carried out by the test team within
the developing organization.
2. Beta Testing: Beta testing is the system testing performed by a select group of friendly
customers.
3. Acceptance Testing: Acceptance testing is the system testing performed by the customer to
determine whether to accept the delivery of the system. The system test cases can be classified
into functionality and performance test cases. Before a fully integrated system is accepted for
system testing, smoke testing is performed.
 Smoke testing is done to check whether at least the main functionalities of the software
are working properly. Unless the software is stable and at least the main functionalities
are working satisfactorily, system testing is not undertaken.
 Smoke testing is carried out before initiating system testing to determine whether system
testing would be meaningful, or whether many parts of the software would fail outright.
 The idea behind smoke testing is that if the integrated program cannot pass even the basic
tests, it is not ready for more rigorous testing.
 For smoke testing, a few test cases are designed to check whether the basic functionalities
are working.
 For example, for a library automation system, the smoke tests may check whether books
can be created and deleted, whether member records can be created and deleted, and
whether books can be loaned and returned (a minimal sketch is given below).
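A minimal sketch of such smoke tests follows; the Library class is only an illustrative in-memory stand-in for the real system under test, and all names are hypothetical:

class Library:
    def __init__(self):
        self.books, self.members, self.loans = set(), set(), set()
    def add_book(self, b): self.books.add(b)
    def delete_book(self, b): self.books.remove(b)
    def add_member(self, m): self.members.add(m)
    def delete_member(self, m): self.members.remove(m)
    def loan(self, m, b): self.loans.add((m, b))
    def return_book(self, m, b): self.loans.remove((m, b))

def smoke_test():
    lib = Library()
    lib.add_book("SE"); lib.delete_book("SE")            # create/delete a book
    lib.add_member("M1"); lib.delete_member("M1")        # create/delete a member
    lib.add_book("SE"); lib.add_member("M1")
    lib.loan("M1", "SE"); lib.return_book("M1", "SE")    # loan and return
    print("smoke test passed: main functionalities work")

smoke_test()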

Performance Testing
Performance testing is an important type of system testing. Performance testing is carried out to
check whether the system meets the nonfunctional requirements identified in the SRS document.
There are several types of performance testing corresponding to various types of non-functional
requirements. For a specific system, the types of performance testing to be carried out on a
system depend on the different non-functional requirements of the system documented in its SRS
document. All performance tests can be considered as black-box tests.
Stress testing
Stress testing is also known as endurance testing. Stress testing evaluates system performance
when it is stressed for short periods of time. Stress tests are black-box tests which are designed to
impose a range of abnormal and even illegal input conditions so as to stress the capabilities of the
software. Input data volume, input data rate, processing time, utilization of memory, etc., are
tested beyond the designed capacity.
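The following minimal Python sketch (the catalogue and the load figures are arbitrary) drives a lookup routine well beyond a presumed designed capacity, mixing in requests for non-existent keys:

import time

catalogue = {f"book-{i}": i for i in range(100_000)}

def lookup(key):
    return catalogue.get(key)       # returns None for unknown keys

start = time.perf_counter()
for i in range(1_000_000):          # input rate far beyond normal load
    lookup(f"book-{i % 200_000}")   # half the requests are for missing keys
elapsed = time.perf_counter() - start
print(f"1,000,000 lookups completed in {elapsed:.2f}s")

A real stress test would, in addition, monitor memory utilization and watch for crashes or unacceptable slowdown under the abnormal load.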
Volume testing
Volume testing checks whether the data structures (buffers, arrays, queues, stacks, etc.) have
been designed to successfully handle extraordinary situations. For example, the volume testing
for a compiler might be to check whether the symbol table overflows when a very large program
is compiled.
Configuration testing
Configuration testing is used to test system behavior in the various hardware and software
configurations specified in the requirements. Sometimes systems are built to work in different
configurations for different users. For instance, a minimal system might be required to serve a
single user, and extended configurations may be required to serve additional users. During
configuration testing, the system is configured in each of the required configurations and,
depending on the specific customer requirements, it is checked whether the system behaves
correctly in all required configurations.
Compatibility testing
This type of testing is required when the system interfaces with external systems (e.g., databases,
servers, etc.). Compatibility testing aims to check whether the interfaces with the external systems are
performing as required. For instance, if the system needs to communicate with a large database
system to retrieve information, compatibility testing is required to test the speed and accuracy of
data retrieval.
Regression testing
This type of testing is required when software is maintained to fix some bugs or enhance
functionality, performance, etc. (Regression testing is discussed in more detail at the end of this
unit.)
Recovery testing
Recovery testing tests the response of the system to the presence of faults, or loss of power,
devices, services, data, etc. The system is subjected to the loss of the mentioned resources (as
discussed in the SRS document) and it is checked if the system recovers satisfactorily. For
example, the printer can be disconnected to check if the system hangs. Or, the power may be shut
down to check the extent of data loss and corruption.
Maintenance testing
This addresses the testing of the diagnostic programs and other procedures that are required to
support maintenance of the system. It is verified that these artifacts exist and that they perform
properly.
Documentation testing
It is checked whether the required user manuals, maintenance manuals, and technical manuals
exist and are consistent. If the requirements specify the types of audience for which a specific
manual should be designed, then the manual is checked for compliance with this requirement.
Usability testing
Usability testing concerns checking the user interface to see if it meets all user requirements
concerning the user interface. During usability testing, the display screens, messages, report
formats, and other aspects relating to the user interface requirements are tested. A GUI being just
functionally correct is not enough; it must also satisfy the stated usability requirements.
Security testing
Security testing is essential for software that handles or processes confidential data that is to be
guarded against pilfering. It needs to be tested whether the system is foolproof against security
attacks such as intrusion by hackers.
Error Seeding
 Sometimes customers specify the maximum number of residual errors that can be present
in the delivered software. These requirements are often expressed in terms of maximum
number of allowable errors per line of source code.
 The error seeding technique can be used to estimate the number of residual errors in
software. Error seeding, as the name implies, involves seeding the code with some known
errors.
 In other words, some artificial errors are introduced (seeded) into the program. The
number of these seeded errors that are detected in the course of standard testing is
determined.
 From these results, two quantities can be estimated: the number of errors remaining in
the product, and the effectiveness of the testing strategy. Let N be the total number of
defects in the system, and let n of these defects be found by testing. Let S be the total
number of seeded defects, and let s of these defects be found during testing. Assuming
that seeded and latent defects are equally likely to be detected, n/N = s/S, so the total
number of latent defects can be estimated as N = (n × S)/s, and the number of residual
defects as N − n = n × (S − s)/s (a small computational sketch follows).
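The estimate can be computed directly, as in the following minimal Python sketch (the defect counts used are invented purely for illustration):

def estimate_residual_defects(n, S, s):
    # Assuming seeded and latent defects are equally likely to be
    # detected (n/N = s/S), the total latent defects N = n * S / s.
    N = n * S / s
    return N - n          # residual (still undetected) latent defects

# Example: 100 defects seeded (S), 80 of them found (s); 160 latent
# defects (n) found during the same testing.
print(estimate_residual_defects(n=160, S=100, s=80))   # -> 40.0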

SOME GENERAL ISSUES ASSOCIATED WITH TESTING


Two issues are discussed here: how to document the results of testing and how to perform
regression testing.
Test documentation
A piece of documentation that is produced towards the end of testing is the test summary report.
This report normally covers each subsystem and represents a summary of the tests which have
been applied to the subsystem and their outcome. It normally specifies the following:
 The total number of tests that were applied to the subsystem.
 Out of the total number of tests, how many tests were successful.
 How many tests were unsuccessful, and the degree to which they were unsuccessful, e.g.,
whether a test was an outright failure or whether some of the expected results of the test
were actually observed.
Regression testing
Regression testing does not belong exclusively to unit, integration, or system testing; rather, it is
a separate dimension that spans all three forms of testing. Regression testing is the practice of
running an old test suite after each change to the system or after each bug fix, to ensure that no
new bug has been introduced due to the change or the bug fix. However, if only a few statements
are changed, then the entire test suite need not be run; only those test cases that exercise the
functions likely to be affected by the change need to be run. Whenever software is changed to
either fix a bug, or enhance or remove a feature, regression testing is carried out.
