
SOFTWARE QUALITY ASSURANCE

INTRODUCTION TO SOFTWARE QUALITY ASSURANCE


Overview
This chapter provides an introduction to software quality assurance.
Software quality assurance (SQA) is the concern of every software
engineer to reduce costs and improve product time-to-market. A Software
Quality Assurance Plan is not merely another name for a test plan, though
test plans are included in an SQA plan. SQA activities are performed on
every software project. Use of metrics is an important part of developing a
strategy to improve the quality of both software processes and work
products.

Software Quality
Software quality is conformance to explicitly stated functional and performance requirements, explicitly documented development standards, and implicit characteristics that are expected of all professionally developed software.

Quality is the degree of excellence of something. We measure the excellence of software via a set of attributes [Glass1992]. Quality is more than satisfying requirements: it is absolute, or at least independent of, the requirements underlying the product. It is part of software development to get the right requirements.

Software Quality Assurance


SQA seeks to maintain quality throughout the development and maintenance of the product by executing a variety of activities at each stage, allowing early identification of the problems that are an almost inevitable feature of any complex activity.

The role of SQA with respect to process is to ensure that planned processes are appropriate and later implemented according to plan, and that relevant measurement processes are provided to the appropriate organization.
The SQA plan defines the means that will be used to ensure that software developed for a specific product satisfies the user's requirements and is of the highest quality possible within project constraints.
In order to do so, it must first ensure that the quality target is clearly
defined and understood. It must consider management, development, and
maintenance plans for the software. The specific quality activities and
tasks are laid out, with their costs and resource requirements, their overall
management objectives, and their schedule in relation to those objectives
in the software engineering management, development, or maintenance
plans.
The SQA plan should be consistent with the software configuration
management plan (refer to the Software Configuration Management KA).
The SQA plan identifies documents, standards, practices, and conventions
governing the project and how they will be checked and monitored to
ensure adequacy and compliance. The SQA plan also identifies measures,
statistical techniques, procedures for problem reporting and corrective
action, resources such as tools, techniques, and methodologies, security
for physical media, training, and SQA reporting and documentation.
Moreover, the SQA plan addresses the software quality assurance activities of any other type of activity described in the software plans, such as procurement of supplier software for the project, commercial off-the-shelf software (COTS) installation, and service after delivery of the software. It can also contain acceptance criteria as well as reporting and management activities which are critical to software quality.

Software Quality IEEE View


Software quality is:

(1) The degree to which a system, component, or process meets specified requirements.
(2) The degree to which a system, component, or process meets customer or user needs or expectations.

What is quality?
Quality, simplistically, means that a product should meet its specification. This is problematic for software systems:

There is a tension between customer quality requirements (efficiency, reliability, etc.) and developer quality requirements (maintainability, reusability, etc.).
Some quality requirements are difficult to specify in an unambiguous way.
Software specifications are usually incomplete and often inconsistent.

Approaches to Tackle Quality

Transcendental view: quality is universally identifiable, absolute, unique and perfect.
Product view: the quality of a product is measurable in an objective manner.
User view: quality is fitness for use.
Manufacturing view: quality is the result of the right development of the product.
Value-based view (economic): quality is a function of costs and benefits.

Measuring Quality?

Reliability (number of failures over time)
Availability

Reliability:
Reliability is the probability of a component, or system, functioning correctly over a given period of time under a given set of operating conditions.

Availability:
The availability of a system is the probability that the system will be functioning correctly at any given time.
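
To make the two measures above concrete, a small worked illustration (hypothetical figures, in Python) using the common approximations based on mean time to failure (MTTF) and mean time to repair (MTTR):

    # Hypothetical figures for illustration only
    mttf = 450.0        # mean time to failure, in hours
    mttr = 50.0         # mean time to repair, in hours

    failure_rate = 1 / mttf                  # failures per hour
    availability = mttf / (mttf + mttr)      # steady-state availability

    print(f"failure rate: {failure_rate:.4f} failures/hour")   # 0.0022
    print(f"availability: {availability:.2%}")                 # 90.00%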

Distinction between Software Errors, Faults and Failures:

Software Error: a section of the code that is incorrect as a result of grammatical, logical or other mistakes made by a system analyst, a programmer, or another member of the software development team.
Software Fault: a software error that causes the incorrect functioning of the software during a specific application.
Software Failure: software faults become software failures only when they are activated.
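
A small illustrative sketch (hypothetical Python snippet, not from the text) of how an error in the code becomes a failure only when it is activated:

    def is_valid_percentage(value):
        # ERROR in the code: the upper boundary is written as "<" instead of "<="
        return 0 <= value < 100   # intended specification: 0 <= value <= 100

    print(is_valid_percentage(50))    # True  -- the faulty path is not activated
    print(is_valid_percentage(100))   # False -- the fault is activated: a visible failure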

SQA Activities:
1. Application of Technical Methods:

Tools to aid in the production of a high-quality specification, i.e. specification checkers and verifiers.
Tools to aid in the production of high-quality designs, i.e. design browsers, checkers, cross-reference tools, and verifiers.
Tools to analyze source code for quality.

2. Formal Technical Reviews:

Group analysis of a specification or design to discover errors.

3. Software Testing:
4. Enforcement of Standards:

Specification and design standards.


Implementation standards, e.g. portability.
Documentation standards.
Testing standards.

5. Control of Change

Formal management of changes to the software and documentation.


Changes require formal request to approving authority.
Approving authority makes decision on which changes get implemented and when.
Programmers are not permitted to make unapproved changes to the software.
Opportunity to evaluate the impact and cost of changes before committing resources.
Evaluate effect of proposed changes on software quality.

6. Measurement

Ongoing assessment of software quality


Track quality changes as system evolves
Warn management if software quality appears to be degrading

SQA Questions

Does the software adequately meet its quality factors?
Has software development been conducted according to pre-established standards?
Have technical disciplines performed their SQA roles properly?

Quality Assurance Elements

Standards - ensures that standards are adopted and followed.
Reviews and audits - audits are reviews performed by SQA personnel to ensure that quality guidelines are followed for all software engineering work.
Testing - ensures that testing is properly planned and conducted.
Error/defect collection and analysis - collects and analyzes error and defect data to better understand how errors are introduced and can be eliminated.
Change management - ensures that adequate change management practices have been instituted.
Education - takes the lead in software process improvement and educational programs.
Vendor management - suggests specific quality practices vendors should follow and incorporates quality mandates in vendor contracts.
Security management - ensures use of appropriate process and technology to achieve the desired security level.
Safety - responsible for assessing the impact of software failure and initiating steps to reduce risk.
Risk management - ensures risk management activities are properly conducted and that contingency plans have been established.

SQA Tasks

Prepare the SQA plan for the project.
Participate in the development of the project's software process description.
Review software engineering activities to verify compliance with the defined software process.
Audit designated software work products to verify compliance with those defined as part of the software process.
Ensure that any deviations in software or work products are documented and handled according to a documented procedure.
Record any evidence of noncompliance and report it to management.

Formal SQA

Assumes that a rigorous syntax and semantics can be defined for every programming language.
Allows the use of a rigorous approach to the specification of software requirements.
Applies mathematical proof of correctness techniques to demonstrate that a program conforms to its specification.
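
A minimal sketch (hypothetical Python example) of the underlying idea: the specification is stated as a precondition and a postcondition, which a proof of correctness would establish mathematically and which can at least be checked at run time with assertions:

    def integer_sqrt(n):
        # Precondition: n is a non-negative integer
        assert isinstance(n, int) and n >= 0
        r = 0
        while (r + 1) * (r + 1) <= n:
            r += 1
        # Postcondition: r is the integer square root of n
        assert r * r <= n < (r + 1) * (r + 1)
        return r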

Statistical Quality Assurance


1. Information about software defects is collected and categorized
2. Each defect is traced back to its cause
3. Using the Pareto principle (80% of the defects can be traced to 20% of the causes), isolate the "vital few" defect causes
4. Move to correct the problems that caused the defects in the vital few
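
A brief sketch (hypothetical defect data, in Python) of steps 1-3: categorized defects are grouped by cause and the "vital few" causes covering roughly 80% of the defects are isolated:

    from collections import Counter

    # Hypothetical defect causes gathered during categorization
    causes = ["incomplete spec", "incomplete spec", "misinterpreted design",
              "coding slip", "incomplete spec", "interface error",
              "misinterpreted design", "incomplete spec"]

    counts = Counter(causes).most_common()
    total = sum(n for _, n in counts)

    covered, vital_few = 0, []
    for cause, n in counts:
        covered += n
        vital_few.append(cause)
        if covered / total >= 0.8:      # stop once ~80% of defects are explained
            break

    print(vital_few)   # the causes to correct first (step 4)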

Six Sigma Software Engineering

Define customer requirements, deliverables, and project goals via well-defined methods of customer communication.
Measure each existing process and its output to determine current quality performance (e.g. compute defect metrics).
Analyze defect metrics and determine the vital few causes.

For an existing process that needs improvement:
Improve the process by eliminating the root causes of defects.
Control future work to ensure that future work does not reintroduce causes of defects.

If new processes are being developed:
Design each new process to avoid root causes of defects and to meet customer requirements.
Verify that the process model will avoid defects and meet customer requirements.
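
As an illustration of the "measure" step, a short sketch (hypothetical numbers, in Python) computing one common defect metric, defects per million opportunities (DPMO):

    defects = 12
    units = 400
    opportunities_per_unit = 50     # places in each unit where a defect could occur

    dpmo = defects / (units * opportunities_per_unit) * 1_000_000
    print(dpmo)   # 600.0 defects per million opportunities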

ISO 9000 Quality Standards

Quality assurance systems are defined as the organizational structure, responsibilities, procedures, processes, and resources for implementing quality management.
ISO 9000 describes the quality elements that must be present for a quality assurance system to be compliant with the standard, but it does not describe how an organization should implement these elements.
ISO 9001:2000 is the quality standard that contains 20 requirements that must be present in an effective software quality assurance system.

SQA Plan

Management section - describes the place of SQA in the structure of the organization.
Documentation section - describes each work product produced as part of the software process.
Standards, practices, and conventions section - lists all applicable standards/practices applied during the software process and any metrics to be collected as part of the software engineering work.
Reviews and audits section - provides an overview of the approach used in the reviews and audits to be conducted during the project.
Test section - references the test plan and procedure document and defines test record-keeping requirements.
Problem reporting and corrective action section - defines procedures for reporting, tracking, and resolving errors or defects, and identifies organizational responsibilities for these activities.
Other - tools, SQA methods, change control, record keeping, training, and risk management.

Software reliability
Software reliability is an important facet of software quality. It is defined as "the probability of failure-free operation of a computer program in a specified environment for a specified time."
One of reliability's distinguishing characteristics is that it is objective and measurable, and can be estimated, whereas much of software quality rests on subjective criteria. This distinction is especially important in the discipline of Software Quality Assurance. These measured criteria are typically called software metrics.

History
With software embedded into many devices today, software failure has caused more than inconvenience. Software errors have even caused human fatalities. The causes have ranged from poorly designed user interfaces to direct programming errors. An example of a programming error that led to multiple deaths is discussed earlier. This has resulted in requirements for the development of some types of software. In the United States, both the Food and Drug Administration (FDA) and the Federal Aviation Administration (FAA) have requirements for software development.

Goal of reliability
The need for a means to objectively determine software reliability comes
from the desire to apply the techniques of contemporary engineering
fields to the development of software. That desire is a result of the
common observation, by both lay-persons and specialists, that computer
software does not work the way it ought to. In other words, software is
seen to exhibit undesirable behavior, up to and including outright failure, with consequences for the data which is processed, the machinery on which the software runs, and by extension the people and materials which those machines might negatively affect. The more critical the application of the software to economic and production processes, or to life-sustaining systems, the more important is the need to assess the software's reliability.
Regardless of the criticality of any single software application, it is also more and more frequently observed that software has penetrated deeply into almost every aspect of modern life through the technology we use. It is only expected that this infiltration will continue, along with an accompanying dependency on the software by the systems which maintain our society. As software becomes more and more crucial to the operation of the systems on which we depend, the argument goes, it only follows that the software should offer a concomitant level of dependability. In other words, the software should behave in the way it is intended, or even better, in the way it should.

Challenge of reliability
The circular logic of the preceding sentence is not accidental: it is meant to illustrate a fundamental problem in the issue of measuring software reliability, which is the difficulty of determining, in advance, exactly how the software is intended to operate. The problem seems to stem from a common conceptual error in the consideration of software, which is that software in some sense takes on a role which would otherwise be filled by a human being. This is a problem on two levels. Firstly, most modern software performs work which a human could never perform, especially at the high level of reliability that is often expected from software in comparison to humans. Secondly, software is fundamentally incapable of most of the mental capabilities of humans which separate them from mere mechanisms: qualities such as adaptability, general-purpose knowledge, a sense of conceptual and functional context, and common sense.

Nevertheless, most software programs could safely be considered to have a particular, even singular purpose. If the possibility can be allowed that said purpose can be well or even completely defined, it should present a means for at least considering objectively whether the software is, in fact, reliable, by comparing the expected outcome to the actual outcome of running the software in a given environment, with given data. Unfortunately, it is still not known whether it is possible to exhaustively determine either the expected outcome or the actual outcome of the entire set of possible environment and input data to a given program, without which it is probably impossible to determine the program's reliability with any certainty.
However, various attempts are being made to rein in the vastness of the space of software's environmental and input variables,
both for actual programs and theoretical descriptions of programs. Such
attempts to improve software reliability can be applied at different stages
of a program's development, in the case of real software. These stages
principally include: requirements, design, programming, testing, and
runtime evaluation. The study of theoretical software reliability is
predominantly concerned with the concept of correctness, a mathematical
field of computer science which is an outgrowth of language and
automata theory.

RELIABILITY IN PROGRAM DEVELOPMENT


Requirements
A program cannot be expected to work as desired if the developers of the
program do not, in fact, know the program's desired behavior in advance,
or if they cannot at least determine its desired behavior in parallel with
development, in sufficient detail. What level of detail is considered
sufficient is hotly debated. The idea of perfect detail is attractive, but may
be impractical, if not actually impossible. This is because the desired
behavior tends to change as the possible range of the behavior is
determined through actual attempts, or more accurately, failed attempts,
to achieve it.

Whether a program's desired behavior can be successfully specified in advance is a moot point if the behavior cannot be specified at all, and this is the focus of attempts to formalize the process of creating requirements for new software projects. Alongside the formalization effort is an attempt to help inform non-specialists, particularly non-programmers, who commission software projects without sufficient knowledge of what computer software is in fact capable of. Communicating this knowledge is made more difficult by the fact that, as hinted above, even programmers cannot always know what is actually possible for software in advance of trying.

Design
While requirements are meant to specify what a program should do,
design is meant, at least at a high level, to specify how the program
should do it. The usefulness of design is also questioned by some, but
those who look to formalize the process of ensuring reliability often offer
good software design processes as the most significant means to
accomplish it. Software design usually involves the use of more abstract
and general means of specifying the parts of the software and what they
do. As such, it can be seen as a way to break a large program down into
many smaller programs, such that those smaller pieces together do the
work of the whole program.
The purposes of high-level design are as follows. It separates what are considered to be problems of architecture, or overall program concept and structure, from problems of actual coding, which solve problems of actual data processing. It applies additional constraints to the development process by narrowing the scope of the smaller software components, and thereby, it is hoped, removing variables which could increase the likelihood of programming errors. It provides a program template, including the specification of interfaces, which can be shared by different teams of developers working on disparate parts, such that they can know in advance how each of their contributions will interface with those of the other teams. Finally, and perhaps most controversially, it specifies the program independently of the implementation language or languages, thereby removing language-specific biases and limitations which would otherwise creep into the design, perhaps unwittingly on the part of programmer-designers.

Programming
The history of computer programming language development can often be best understood in the light of attempts to master the complexity of computer programs, which otherwise become more difficult to understand in proportion (perhaps exponentially) to the size of the programs. (Another way of looking at the evolution of programming languages is simply as a way of getting the computer to do more and more of the work, but this may be a different way of saying the same thing.) Lack of understanding of a program's overall structure and functionality is a sure way to fail to detect errors in the program, and thus the use of better languages should, conversely, reduce the number of errors by enabling a better understanding.
Improvements in languages tend to provide incrementally what software design has attempted to do in one fell swoop: consider the software at ever greater levels of abstraction. Such inventions as the statement, subroutine, file, class, template, library, component and more have allowed the arrangement of a program's parts to be specified using abstractions such as layers, hierarchies and modules, which provide structure at different granularities, so that from any point of view the program's code can be imagined to be orderly and comprehensible.
In addition, improvements in languages have enabled more exact control
over the shape and use of data elements, culminating in the abstract data
type. These data types can be specified to a very fine degree, including
how and when they are accessed, and even the state of the data before
and after it is accessed.

Software Build and Deployment

Many programming languages such as C and Java require the program "source code" to be translated into a form that can be executed by a computer. This translation is done by a program called a compiler. Additional operations may be involved to associate, bind, link or package files together in order to create a usable runtime configuration of the software application. The totality of the compiling and assembly process is generically called "building" the software.
The software build is critical to software quality because if any of the generated files are incorrect the software build is likely to fail. And, if the incorrect version of a program is inadvertently used, then testing can lead to false results.
Software builds are typically done in a work area unrelated to the runtime area, such as the application server. For this reason, a deployment step is needed to physically transfer the software build products to the runtime area. The deployment procedure may also involve technical parameters, which, if set incorrectly, can also prevent software testing from beginning. For example, a Java application server may have options for parent-first or parent-last class loading. Using the incorrect parameter can cause the application to fail to execute on the application server.
The technical activities supporting software quality, including build, deployment, change control and reporting, are collectively known as software configuration management. A number of software tools have arisen to help meet the challenges of configuration management, including file control tools and build control tools.
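
One safeguard implied above can be sketched as follows (hypothetical file names, in Python): before testing begins, verify that the artifact deployed to the runtime area is byte-for-byte the artifact the build produced, so tests are not run against a stale or incorrect copy:

    import hashlib

    def sha256(path):
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    build_artifact = "build/app.jar"        # produced by the build step (hypothetical)
    deployed_artifact = "server/app.jar"    # copied by the deployment step (hypothetical)

    if sha256(build_artifact) != sha256(deployed_artifact):
        raise SystemExit("Deployed artifact does not match the build output")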

Testing
Software testing, when done correctly, can increase overall software quality of conformance by testing that the product conforms to its requirements. [8] Testing includes, but is not limited to:
1. Unit Testing
2. Functional Testing
3. Regression Testing
4. Performance Testing
5. Failover Testing
6. Usability Testing
A number of agile methodologies use testing early in the development
cycle to ensure quality in their products. For example, the test-driven
development practice, where tests are written before the code they will
test, is used in Extreme Programming to ensure quality.
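
A tiny sketch (hypothetical Python example) of the test-first idea: the test is written before the function it exercises, and the implementation is added only to make the test pass:

    import unittest

    def add(a, b):
        return a + b          # written after the test below already existed

    class TestAdd(unittest.TestCase):
        def test_add_two_numbers(self):
            self.assertEqual(add(2, 3), 5)

    if __name__ == "__main__":
        unittest.main()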

Runtime
Runtime reliability determinations are similar to tests, but go beyond
simple confirmation of behavior to the evaluation of qualities such as
performance and interoperability with other code or particular hardware
configurations.

SOFTWARE QUALITY FACTORS:


A software quality factor is a non-functional requirement for a software program which is not called up by the customer's contract, but nevertheless is a desirable requirement which enhances the quality of the software program. Note that none of these factors are binary; that is, they are not "either you have it or you don't" traits. Rather, they are characteristics that one seeks to maximize in one's software to optimize its quality. So rather than asking whether a software product has factor x, ask instead the degree to which it does (or does not).
Some software quality factors are listed here:

Understandability
Clarity of purpose. This goes further than just a statement of purpose; all of the design and user documentation must be clearly written so that it is easily understandable. This is obviously subjective in that the user context must be taken into account: for instance, if the software product is to be used by software engineers it is not required to be understandable to the layman.

Completeness
Presence of all constituent parts, with each part fully developed. This
means that if the code calls a subroutine from an external library, the
software package must provide reference to that library and all required
parameters must be passed. All required input data must also be
available.

Conciseness
Minimization of excessive or redundant information or processing. This is
important where memory capacity is limited, and it is generally
considered good practice to keep lines of code to a minimum. It can be
improved by replacing repeated functionality by one subroutine or
function which achieves that functionality. It also applies to documents.

Portability
Ability to be run well and easily on multiple computer configurations. Portability can mean both between different hardware (such as running on a PC as well as a smartphone) and between different operating systems (such as running on both Mac OS X and GNU/Linux).

Consistency
Uniformity in notation, symbology, appearance, and terminology within
itself.

Maintainability
Propensity to facilitate updates to satisfy new requirements. Thus the
software product that is maintainable should be well-documented, should
not be complex, and should have spare capacity for memory, storage and
processor utilization and other resources.

Testability

Disposition to support acceptance criteria and evaluation of performance.


Such a characteristic must be built-in during the design phase if the
product is to be easily testable; a complex design leads to poor testability.

Usability
Convenience and practicality of use. This is affected by such things as the
human-computer interface. The component of the software that has most
impact on this is the user interface (UI), which for best usability is usually
graphical (i.e. a GUI).

Reliability
Ability to be expected to perform its intended functions satisfactorily. This
implies a time factor in that a reliable product is expected to perform
correctly over a period of time. It also encompasses environmental
considerations in that the product is required to perform correctly in
whatever conditions it finds itself (sometimes termed robustness).

Efficiency
Fulfillment of purpose without waste of resources, such as memory, space
and processor utilization, network bandwidth, time, etc.

Security
Ability to protect data against unauthorized access and to withstand
malicious or inadvertent interference with its operations. Besides the
presence of appropriate security mechanisms such as authentication,
access control and encryption, security also implies resilience in the face
of malicious, intelligent and adaptive attackers.

Measurement of software quality factors


There are varied perspectives within the field on measurement. There are a great many measures that are valued by some professionals, or in some contexts, that are decried as harmful by others. Some believe that quantitative measures of software quality are essential. Others believe that contexts where quantitative measures are useful are quite rare, and so prefer qualitative measures. Several leaders in the field of software testing have written about the difficulty of measuring well what we truly want to measure.
One example of a popular metric is the number of faults encountered in
the software. Software that contains few faults is considered by some to
have higher quality than software that contains many faults. Questions
that can help determine the usefulness of this metric in a particular
context include:
1. What constitutes "many faults"? Does this differ depending upon the purpose of the software (e.g., blogging software vs. navigational software)? Does this take into account the size and complexity of the software?
2. Does this account for the importance of the bugs (and the
importance to the stakeholders of the people those bugs bug)? Does
one try to weight this metric by the severity of the fault, or the
incidence of users it affects? If so, how? And if not, how does one
know that 100 faults discovered is better than 1000?
3. If the count of faults being discovered is shrinking, how do I know
what that means? For example, does that mean that the product is
now higher quality than it was before? Or that this is a smaller/less
ambitious change than before? Or that fewer tester-hours have gone
into the project than before? Or that this project was tested by less
skilled testers than before? Or that the team has discovered that
fewer faults reported is in their interest?
This last question points to an especially difficult one to manage. All
software quality metrics are in some sense measures of human behavior,
since humans create software. If a team discovers that they will benefit
from a drop in the number of reported bugs, there is a strong tendency for
the team to start reporting fewer defects. That may mean that email
begins to circumvent the bug tracking system, or that four or five bugs
get lumped into one bug report, or that testers learn not to report minor annoyances. The difficulty is measuring what we mean to measure, without creating incentives for software programmers and testers to consciously or unconsciously game the measurements.
Software quality factors cannot be measured because of their vague
definitions. It is necessary to find measurements, or metrics, which can be
used to quantify them as non-functional requirements. For example,
reliability is a software quality factor, but cannot be evaluated in its own
right. However, there are related attributes to reliability, which can indeed
be measured. Some such attributes are mean time to failure, rate of
failure occurrence, and availability of the system. Similarly, an attribute of
portability is the number of target-dependent statements in a program.
A scheme that could be used for evaluating software quality factors is
given below. For every characteristic, there are a set of questions which
are relevant to that characteristic. Some type of scoring formula could be
developed based on the answers to these questions, from which a
measurement of the characteristic can be obtained.
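
A brief sketch (hypothetical questions, in Python) of such a scheme: each characteristic has a set of yes/no questions, and a simple formula turns the answers into a score for that characteristic:

    maintainability_questions = {
        "Is the design documented?": True,
        "Are the modules loosely coupled?": True,
        "Is spare processor and memory capacity available?": False,
    }

    score = sum(maintainability_questions.values()) / len(maintainability_questions)
    print(f"Maintainability score: {score:.2f}")   # 0.67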

INSPECTION AND REVIEWS:


INSPECTIONS
Inspections are more formal than walkthroughs. They consist of five steps:

An overview is given by a member of the team that created the document. At the end, the document is distributed.

In the preparation step, each participant studies the document, considers lists of faults discovered in similar previous inspections, and takes notes.

Then, the actual inspection meeting begins, with a team member walking through the document (again, the goal is to uncover faults, not to fix them). Within a day after the meeting, the moderator produces a written report.

The rework is performed by the team that created the document, to take into account all the items in the report.

Finally, in the follow-up, the moderator ensures that every item raised at the meeting has been resolved. In addition, fixes must be checked to ensure that no new faults have been introduced. If more than 5% of the document has been reworked, a new full inspection of the entire document is called for. Note: it must be decided in advance how to measure this fraction.

Inspection teams should have four members. For example, for a design inspection, there must be a moderator, a designer, a coder, and a tester. The moderator is the leader of the team. The roles of reader and recorder would be played by the designer and the coder or the tester in this case.
Walkthroughs and, even more so, inspections have a good chance of finding faults early in the development. However, for large projects, they require that the work products be small enough (reviews should not last many hours) and, unless good documentation is available at all steps, they are not effective at discovering problems.

Inspection's Role in Software Quality Assurance


Despite more than 30 years' effort to improve software quality, companies
still release programs containing numerous errors. Many major products
have thousands of bugs. It's not for lack of trying; all major software
developers stress software quality assurance and try to remove bugs
before release. The problem is the code's complexity. It's easy to review
code but fail to notice significant errors.
Researchers have responded to these problems by studying methods of
formal correctness verification for programs. In theory, we now know how
to prove programs correct with the same degree of rigor that we apply to
mathematical theorems. In reality, this is rarely practical and even more
rarely done. Most research papers on verification make simplifying
assumptions (for example, a 1:1 correspondence between variables and
variable names) that aren't valid for real programs. Proofs of realistic
programs involve long, complex expressions and require patience, time,
and diligence that developers don't think they have. (Interestingly
enough, they never have time to verify a program before release, but they
must take time to respond to complaints after release.) Inspection

methods can be more effective than informal reviews and require less
effort than formal proofs, but their success depends on having a sound,
systematic procedure. Tools that support this procedure are also
important.
The Workshop on Inspection in Software Engineering (WISE), a satellite
event of the 2001 Computer Aided Verification Conference (CAV 01),
brought together researchers, practitioners, and regulators in the hope of
finding new, more effective software inspection approaches. Submissions
described how practitioners and researchers were performing inspections,
discussed inspections' relevance, provided evidence of how refinement of
the inspection process and computer-aided tool support can improve
inspections, and explained how careful software design could make
inspections more effective. The best ideas from the workshop have been
distilled into pairs of articles appearing in linked special issues of IEEE
Software and IEEE Transactions on Software Engineering.

Why Software Inspections?

Many quality characteristics (e.g. understandability, changeability, informational value of identifiers and comments) are testable only manually.
Undetected faults from the definition and design phases later cause high consequential costs.
As inspections are executed in a team, the knowledge base is enhanced.
Implements the principle of external quality control.
Delivery of high-quality results to the subsequent development phase (milestone).
Responsibility for the quality is assigned to the whole team.
Manual testing of products is a useful complement to tool-supported tests.
The compliance to standards is permanently monitored.
Critical product components are detected early.
Every successful inspection is a milestone in the project.
Every member of the inspection team becomes acquainted with the working methods of his colleagues.
As several persons inspect the products, the authors try to use an understandable style.
Different software products of the same author contain fewer defects from inspection to inspection.
It turned out that functioning inspections are a very efficient means for quality assurance.

What are inspections?


Inspections are a means of verifying intellectual products by manually examining the developing product, a piece at a time, in small groups of peers to ensure that it is correct and conforms to product specifications and requirements [STRAUSS]. The purpose of inspections is the detection of defects. There are two aspects of inspections to address. One is that inspections occur in the early stages of the software life cycle and examine a piece of the developing product at a time. These stages include requirements, design, and coding. Without inspections, defects introduced in the first stage would be amplified into more defects in the design stage and many more defects in the coding stage. Thus earlier detection of defects lowers the cost of software development and helps ensure the quality of the software product at delivery. On the other hand, a small group of peers concentrating on one part of the product can detect more defects than the same number of people working alone. This improved effectiveness comes from the thoroughness of the inspection procedures and the synergy achieved by an inspection team.

Why use inspections


Any effort is expected to be turned, without waste, into a contribution to quality, productivity, and customer satisfaction. Waste, however, is essentially the cost of doing things over, possibly several times, until they are done correctly. In addition, the costs of lost time, lost productivity, lost customers, and lost business are real costs because there is no return on them. The cost of quality, however, is not such a negative cost. Experience with inspections shows that the time added to the development cycle to accommodate the inspection process is more than gained back in the testing and manufacturing cycles, and in the cost of redevelopment that doesn't need to be done.

What are the differences among inspections,


walkthroughs and reviews?
Among the methods of quality control, inspection is a mechanism that has proven extremely effective for the specific objective of product verification in many development activities. It is a structured method of quality control, as it must follow a specified series of steps that define what can be inspected, when it can be inspected, who can inspect it, what preparation is needed for the inspection, how the inspection is to be conducted, what data is to be collected, and what the follow-up to the inspection is. Inspections on a project therefore provide close procedural control and repeatability. Reviews and walkthroughs, however, have less structured procedures. They can have many purposes and formats. Reviews can be used to form decisions and resolve issues of design and development. They can also be used as a forum for information swapping or brainstorming. Walkthroughs are used for the resolution of design or implementation issues. Both methods can range from being formalized and following a predefined set of procedures to completely informal. Thus they lack the close procedural control and repeatability of inspections.

INSPECTION PROCESS
Inspection Process Overview
The inspection process consists of seven primary steps: Planning,
Overview, Preparation, Inspection, Discussion, Rework, and Follow-up. Two
of these steps (Overview and Discussion) are optional based on the needs
of the project. This chapter describes each of these steps.

Inspection Steps and Deliverables


Each step in the inspection process should have specified deliverables. (If there are no deliverables, the step should be eliminated.) The following table outlines recommended deliverables for each part of the inspection process. Also indicated are the roles responsible for each deliverable.

Inspection Step   Deliverable(s)                      Responsible Role
Planning          Participant List                    Moderator
                  Materials for Inspection            Author
                  Agenda                              Moderator
                  Entrance Criteria                   Moderator
Overview          Common Understanding of Project     Moderator
Preparation       Preparation Logs                    Each Inspector
Inspection        Defect Log                          Recorder
                  Inspection Report                   Moderator
Discussion        Suggested Defect Fixes              Various
Rework            Corrected Defects                   Author
Follow-up         Inspection Report (amended)         Moderator

PLANNING AN INSPECTION
Planning for a software review or inspection consists of four key parts: selecting the participants, developing the agenda, distributing necessary materials, and determining entrance criteria.

1. Selecting Participants
Selection of participants for an inspection can involve good political and
negotiation skills. The main purpose of the inspection is to improve
software quality and reduce defects. Rule of thumb #1: If a participant
does not have the qualifications to contribute to the inspection, they
should not be included.
Another complexity to selecting participants is how many people should
be included. The number is somewhat determined by assigning specific
roles. Section 5.0 discusses roles and how to assign them. A second
consideration on team size has to do with communication. The larger the
group, the more lines of communication must be maintained. If the group
grows beyond 6-10 people, the time spent on communication, scheduling,
and maintaining focus will detract from the quality and timeliness of the
inspection. Rule of thumb #2: The optimum team size for inspections is
less than 10.
A common question is: should managers be included? Managers should be aware of the outcome of inspections, but generally not included. Referring to rule of thumb #1, managers should only participate if they will directly add value to the substance of the inspection. Inspections and reviews are meant to improve software quality, not to assess or manage people. If the latter becomes the goal of the process, the inspections will be threatening and participants will not be completely open.

2. Developing the Agenda


The agenda for an inspection should be created in advance and distributed to all participants. A standard template for each type of inspection should be developed based on the organization's implementation of the inspection process. The inspection meeting should be managed closely to follow the agenda. If meetings frequently go off track and run late, participation will decline.

3. Distributing Materials
Prior to an inspection, all materials that will be used in the meeting should
be distributed. The lead time depends upon the size of the items to be
reviewed. In general, there should be time for each participant to
thoroughly review the component they are responsible for inspecting. The
author should provide the materials to the moderator. The moderator will
verify and distribute them.
The method of distribution for materials is dependent on the culture of the
development team and the type of inspection to be conducted. Many
participants may prefer hard copy for design elements, while others prefer
to navigate through electronic documents. Online access to code for
inspection may be preferred for access to search functions.
Inspection participants should be expected to bring any materials they need for the review to the meeting. There should not be new material at the meeting; if the materials were incomplete, the review should be rescheduled.

4. Entrance Criteria
Specific criteria should be required to qualify a product or artifact for an
inspection [COLL]. Typical entrance criteria would be that the product is
complete and would be delivered to the customer if the inspection finds
no defects.

PREPARING FOR THE INSPECTION


1. Inspecting the Product or Artifacts
The primary activity in preparation for a software inspection is the thorough inspection of the product or artifacts in question. This should be conducted prior to the inspection meeting and at a line-by-line or item-by-item level of detail. Inspection should consider standards, best practices, and regulations or policies.

2. Checklists
Inspections cover a wide variety of project artifacts, including requirements, design, code, and test plans. For each different type of inspection it is useful for an organization to create standard checklists for use by the inspectors [NASA1]. The checklists should be "living" documents so an organization can improve on inspection practices and develop core inspection knowledge.

3. Preparation Logs
During preparation, inspectors should log all defects found and track the
time spent on preparation. The use of a Preparation Log by each inspector
is a best practice. The preparation log should be given to the moderator
before the inspection meeting. The moderator can determine based on all
the logs whether the inspection team is adequately prepared for the
event.

CONDUCTING THE INSPECTION


The software inspection is conducted by presenting the material,
focusing on inspection of the product and completing two
deliverables, the Defect Log and the Inspection Report.

1. Presenting the Material


The reader presents the inspection material at a speed and in a manner such that all participants can understand, keep up, and contribute. An inspection should not last more than a few hours, due to the difficulty of maintaining attention and sitting comfortably for longer. If an inspection covers more material than can be covered in 2 hours, it should be broken up into multiple sessions or multiple inspections.

2. Focus on Inspecting the Product


The moderator must maintain focus of the meeting on the inspection.
Discussion should only point out defects to the author or ask for
clarification. Any discussion about how to correct defects should be
deferred to another meeting. By maintaining focus on the actual product
being inspected, feelings of confrontation can be avoided. Specifically,
defects logged are found in the product, not in people.

3. Defect Log
An accurate log of all defects must be kept. The log should indicate what the defect is, the defect type, where it was found, and the severity of the defect (major, minor). The defect log will be used by the author to prioritize and correct defects. It may also be used for metrics in a process improvement effort. The Defect Log should not be used in evaluating individuals. Its purpose is to improve software quality through honest and open feedback; any other use could lead to problems.
The defect log may be sufficient for many projects. However, defects may
be somewhat complex and further information may need to be provided. A
defect report form can be used in conjunction with the defect log to
provide more information. In such circumstances, the defect log may
become the index to individual defect reports.
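
A minimal sketch (hypothetical fields, in Python) of a defect log entry carrying the information listed above: what the defect is, its type, where it was found, and its severity:

    from dataclasses import dataclass

    @dataclass
    class DefectLogEntry:
        description: str      # what the defect is
        defect_type: str      # e.g. logic, interface, documentation
        location: str         # where it was found
        severity: str         # "major" or "minor"

    defect_log = [
        DefectLogEntry("Loop never terminates for empty input", "logic",
                       "parser module, line 120", "major"),
    ]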

4. Inspection Report

At the conclusion of the inspection, the moderator should compile an inspection report. It should note the quantity of major and minor defects and indicate whether the project may proceed or needs rework. In the event there are defects in need of correction, an estimate of time and effort should be provided. The Inspection Report should be used to communicate to project management the results of the inspection.
If rework is required, the Inspection Report will later be amended after completion of rework.

ROLES IN INSPECTION PROCESS


The inspection process is performed by an inspection team. It is repeated many times on many work products during a given product development. How well the inspection teams do their job will decide whether there is a net decrease in the overall development schedule and an increase in net productivity. To carry out the inspection, five specific procedural roles are always assigned.

1. Roles and Responsibilities
Moderator
Reader
Recorder
Author
Inspector

2. Guidelines for roles


To ensure the quality, efficiency and effectiveness of inspection teams, it
is very important to carefully manage and use well-formed inspection
teams.

Inspection

teams need to combine several factors.


All team members are inspectors. Readers and recorders should be
experienced inspectors. The number of inexperienced inspectors should
be

limited

if

possible.

Minimum is three (a moderator/ recorder, a reader, and an author).


Enough team members can adequately verify the work product for the
intended purpose of the inspection, but any more persons will reduce the
effectiveness

of

the

process.

So inspection team should be small with Maximum of seven persons.

3. Participation of inspectors
I. Planning
Roles: - Moderator
- Author

II. Overview
Roles: - Moderator
- Author
- Inspectors

III. Preparation
Roles: - All inspectors

IV. Inspection Meeting


Roles: - Moderator
- Author
- Reader
- Recorder
- Inspectors

V. Discussion

Roles: - All inspectors

VI. Rework
Roles: - Author

VII. Follow-up
Roles: - Moderator
- Author

REVIEWS

Review here refers to methods which are not formal inspections; in part, "review" is used in the literature as a generic term for all manual test methods (formal inspection included).
Reviews are often not only focused on the efficient detection of faults, but also serve as a means for:
decision making
solving of conflicts (e.g. concerning design decisions)
exchange of information
brainstorming
Normally no formal procedure exists for the execution and the choice of the participants as well as their roles.
Often there is no record and analysis of review data.
Often there are no quantitative objectives.

Formal Technical Reviews (FTR)


Objectives of FTR:
- To uncover errors in function, logic, or implementation
- To verify the software under review meets its requirements
- To ensure that the software has been represented according to
predefined standards
- To develop software in a uniform manner
- To make projects more manageable
Purposes of FTR:
- Serves as a training ground for junior engineers
- Promotes backup and continuity
Review meeting constraints:
- 3-5 people involved in a review
- advanced preparation (no more than 2 hours for each person)
- The duration of the review meeting should be less than 2 hours
- focus on a specific part of a software product
People involved in a review meeting:
- Producer, review leader, 2 or 3 reviewers (one of them is recorder)

Formal Technical Review Meeting


The preparation of a review meeting:
A meeting agenda and schedule (by review leader)
Review material and distribution (by the producer)
Review in advance (by reviewers)
Review meeting results:
A review issues list
A simple review summary report (called meeting minutes)
Meeting decisions:
accept the work product w/o further modification

reject the work product due to errors


accept the work under conditions (such as change and review)
Sign-off sheet
Review summary report (a project historical record) answers the
following questions:
What was reviewed?
Who reviewed it?
What were the findings and conclusions?
Review issues list serves two purposes:
To identify problem areas in the project
To serve as an action item checklist (a follow-up procedure is
needed)

Review Guidelines (for FTR)


A minimum set of guidelines for FTR:

Review the product, not the producer


Set an agenda and maintain it
Limit debate and rebuttal
Enunciate problem areas, but don't attempt to solve every problem noted
Take written notes
Limit the number of participants and insist upon advance
preparation
Develop a checklist for each work product that is likely to be
reviewed
Allocate resources and time schedule for FTRs
Conduct meaningful training for all reviewers
Review your early reviews

SOFTWARE TESTING
Introduction
Software Testing is the process of executing a program or system with the
intent of finding errors. Or, it involves any activity aimed at evaluating an
attribute or capability of a program or system and determining that it
meets its required results. Software is not unlike other physical processes
where inputs are received and outputs are produced. Where software
differs is in the manner in which it fails. Most physical systems fail in a

fixed (and reasonably small) set of ways. By contrast, software can fail in
many bizarre ways. Detecting all of the different failure modes for
software is generally infeasible.
Unlike most physical systems, most of the defects in software are design errors, not manufacturing defects. Software does not suffer from corrosion or wear-and-tear -- generally it will not change until upgrades, or until obsolescence. So once the software is shipped, the design defects -- or bugs -- will be buried in and remain latent until activation.
Software bugs will almost always exist in any software module with
moderate size: not because programmers are careless or irresponsible,
but because the complexity of software is generally intractable -- and
humans have only limited ability to manage complexity. It is also true that
for any complex systems, design defects can never be completely ruled
out.
Discovering the design defects in software is equally difficult, for the same reason of complexity. Because software and any digital systems are not continuous, testing boundary values is not sufficient to guarantee correctness. All the possible values need to be tested and verified, but complete testing is infeasible. Exhaustively testing a simple program to add only two integer inputs of 32 bits (yielding 2^64 distinct test cases) would take hundreds of millions of years, even if tests were performed at a rate of thousands per second. Obviously, for a realistic software module, the complexity can be far beyond the example mentioned here. If inputs from the real world are involved, the problem will get worse, because timing and unpredictable environmental effects and human interactions are all possible input parameters under consideration.
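
A quick check of the arithmetic behind that claim (illustrative only, in Python):

    cases = 2 ** 64                      # distinct pairs of 32-bit inputs
    rate = 1_000                         # tests per second
    seconds_per_year = 60 * 60 * 24 * 365

    years = cases / rate / seconds_per_year
    print(f"{years:.1e} years")          # about 5.8e+08 years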
A further complication has to do with the dynamic nature of programs. If a
failure occurs during preliminary testing and the code is changed, the
software may now work for a test case that it didn't work for previously.
But its behavior on pre-error test cases that it passed before can no longer be guaranteed. To account for this possibility, testing should be restarted. The expense of doing this is often prohibitive.
Testing is usually performed for the following purposes:

To improve quality.
As computers and software are used in critical applications, the outcome
of a bug can be severe. Bugs can cause huge losses. Bugs in critical
systems have caused airplane crashes, allowed space shuttle missions to
go awry, halted trading on the stock market, and worse. Bugs can kill.
Bugs can cause disasters. The so-called year 2000 (Y2K) bug has given
birth to a cottage industry of consultants and programming tools
dedicated to making sure the modern world doesn't come to a screeching
halt on the first day of the next century. In a computerized embedded
world, the quality and reliability of software is a matter of life and death.
Quality means the conformance to the specified design requirement.
Being correct, the minimum requirement of quality, means performing as
required under specified circumstances. Debugging, a narrow view of
software testing, is performed heavily to find out design defects by the
programmer. The imperfection of human nature makes it almost
impossible to make a moderately complex program correct the first time.
Finding the problems and getting them fixed is the purpose of debugging in the programming phase.

For Verification & Validation (V&V)


Another important purpose of testing is verification and validation (V&V). Testing can serve as metrics. It is heavily used as a tool in the V&V process. Testers can make claims based on interpretations of the testing results, namely that the product either works under certain situations or it does not. We can also compare the quality among different products under the same specification, based on results from the same test.

We cannot test quality directly, but we can test related factors to make quality visible. Quality has three sets of factors -- functionality, engineering, and adaptability. These three sets of factors can be thought of as dimensions in the software quality space. Each dimension may be broken down into its component factors and considerations at successively lower levels of detail. Table 1 illustrates some of the most frequently cited quality considerations.
Functionality              Engineering               Adaptability
(exterior quality)         (interior quality)        (future quality)
Correctness                Efficiency                Flexibility
Reliability                Testability               Reusability
Usability                  Documentation             Maintainability
Integrity                  Structure

Table 1. Typical Software Quality Factors

Good testing provides measures for all relevant factors. The importance of any particular factor varies from application to application. Any system where human lives are at stake must place extreme emphasis on reliability and integrity. In the typical business system usability and maintainability are the key factors, while for a one-time scientific program neither may be significant. Our testing, to be fully effective, must be geared to measuring each relevant factor and thus forcing quality to become tangible and visible.
Tests with the purpose of validating that the product works are named clean
tests, or positive tests. The drawback is that such testing can only validate
that the software works for the specified test cases; a finite number of tests
cannot validate that the software works for all situations. On the contrary, a
single failed test is sufficient to show that the software does not work.
Dirty tests, or negative tests, refer to tests aimed at breaking the software,
or showing that it does not work. A piece of software must have sufficient
exception-handling capability to survive a significant level of dirty tests.
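As a minimal illustration of this distinction, the JUnit 5 sketch below pairs a clean test, which shows the unit works for a specified case, with dirty tests that try to break it. The AgeParser class and its parseAge method are hypothetical examples, not part of the text above.

import static org.junit.jupiter.api.Assertions.*;
import org.junit.jupiter.api.Test;

// Hypothetical unit under test: parses an age string into an int in [0, 150].
class AgeParser {
    static int parseAge(String text) {
        int value = Integer.parseInt(text.trim());
        if (value < 0 || value > 150) {
            throw new IllegalArgumentException("age out of range: " + value);
        }
        return value;
    }
}

class AgeParserTest {
    @Test
    void cleanTest_validInputIsAccepted() {
        // Positive (clean) test: show that the software works for a specified case.
        assertEquals(42, AgeParser.parseAge(" 42 "));
    }

    @Test
    void dirtyTest_outOfRangeInputIsRejected() {
        // Negative (dirty) test: try to break the software with invalid input.
        assertThrows(IllegalArgumentException.class, () -> AgeParser.parseAge("-5"));
    }

    @Test
    void dirtyTest_nonNumericInputIsRejected() {
        assertThrows(NumberFormatException.class, () -> AgeParser.parseAge("abc"));
    }
}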
A testable design is a design that can be easily validated, falsified and
maintained. Because testing is a rigorous effort and requires significant
time and cost, design for testability is also an important design rule for
software development.

For Reliability Estimation


Software reliability has important relations with many aspects of software,
including the structure, and the amount of testing it has been subjected
to. Based on an operational profile (an estimate of the relative frequency
of use of various inputs to the program [Lyu95]), testing can serve as a
statistical sampling method to gain failure data for reliability estimation.
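A minimal sketch of this idea is shown below, assuming a made-up operational profile and failure behaviour: test inputs are drawn according to the profile's weights, failures are counted, and reliability is estimated as one minus the observed failure rate (a simple Nelson-style estimate). The input classes, weights, and failure probabilities are illustrative only.

import java.util.List;
import java.util.Random;
import java.util.function.Predicate;

// Sketch: estimate reliability by statistical sampling guided by an operational
// profile (relative frequencies of input classes). All data here is invented.
public class ReliabilityEstimate {

    record InputClass(String name, double weight, Predicate<Random> runsWithoutFailure) {}

    public static void main(String[] args) {
        Random rng = new Random(17);
        List<InputClass> profile = List.of(
            new InputClass("typical query", 0.80, r -> true),              // assumed never to fail
            new InputClass("empty query",   0.15, r -> r.nextDouble() > 0.01),
            new InputClass("huge query",    0.05, r -> r.nextDouble() > 0.10)
        );

        int runs = 10_000, failures = 0;
        for (int i = 0; i < runs; i++) {
            // Pick an input class with probability proportional to its weight.
            double u = rng.nextDouble(), acc = 0.0;
            InputClass chosen = profile.get(profile.size() - 1);
            for (InputClass c : profile) {
                acc += c.weight();
                if (u <= acc) { chosen = c; break; }
            }
            if (!chosen.runsWithoutFailure().test(rng)) {
                failures++;
            }
        }
        // Simple estimate: reliability = 1 - observed failure rate under the profile.
        double reliability = 1.0 - (double) failures / runs;
        System.out.printf("failures = %d, estimated reliability = %.4f%n", failures, reliability);
    }
}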
Software testing is not mature. It remains an art, because we still cannot
make it a science. We are still using the same testing techniques invented
20-30 years ago, some of which are crafted methods or heuristics rather than
good engineering methods. Software testing can be costly, but not testing
software is even more expensive, especially in places where human lives are at
stake. Solving the software-testing problem is no easier than solving the
Turing halting problem. We can never be sure that a piece of software is
correct. We can never be sure that the specifications are correct. No
verification system can verify every correct program, and we can never be
certain that a verification system is correct either.

Performance testing
Not all software systems have explicit performance specifications, but every
system has implicit performance requirements: the software should not take
infinite time or infinite resources to execute. The term "performance bugs" is
sometimes used to refer to design problems in software that cause the system
performance to degrade.

Performance has always been a great concern and a driving force of computer
evolution. Performance evaluation of a software system usually includes
resource usage, throughput, stimulus-response time, and queue lengths detailing
the average or maximum number of tasks waiting to be serviced by selected
resources. Typical resources that need to be considered include network
bandwidth requirements, CPU cycles, disk space, disk access operations, and
memory usage. The goal of performance testing can be performance bottleneck
identification, performance comparison and evaluation, etc. The typical method
of doing performance testing is using a benchmark -- a program, workload, or
trace designed to be representative of typical system usage.
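The sketch below illustrates the benchmark idea in miniature: a stand-in workload (sorting a random array) is executed repeatedly after a warm-up phase, and average response time and throughput are reported. Real performance testing would use a workload representative of actual usage and usually a dedicated harness, so treat this as illustrative only.

import java.util.Arrays;
import java.util.Random;

// Minimal benchmark-style performance test with an invented workload.
public class SortBenchmark {
    public static void main(String[] args) {
        Random rng = new Random(1);
        int iterations = 200;
        int size = 100_000;

        // Warm-up so JIT compilation does not distort the measurement.
        for (int i = 0; i < 20; i++) {
            runWorkload(rng, size);
        }

        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            runWorkload(rng, size);
        }
        long elapsedNs = System.nanoTime() - start;

        double avgMs = elapsedNs / 1_000_000.0 / iterations;
        double throughput = iterations / (elapsedNs / 1_000_000_000.0);
        System.out.printf("avg response time = %.2f ms, throughput = %.1f ops/s%n", avgMs, throughput);
    }

    static void runWorkload(Random rng, int size) {
        // Stand-in for representative system usage.
        int[] data = rng.ints(size).toArray();
        Arrays.sort(data);
    }
}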

Reliability testing
Software reliability refers to the probability of failure-free operation of a
system. It is related to many aspects of software, including the testing
process. Directly estimating software reliability by quantifying its related
factors can be difficult. Testing is an effective sampling method to
measure software reliability. Guided by the operational profile, software
testing (usually black-box testing) can be used to obtain failure data, and
an estimation model can be further used to analyze the data to estimate
the present reliability and predict future reliability. Therefore, based on the
estimation, the developers can decide whether to release the software,
and the users can decide whether to adopt and use the software. The risk of
using the software can also be assessed based on reliability information. Some
advocate that the primary goal of testing should be to measure the
dependability of the tested software.
There is agreement on the intuitive meaning of dependable software: it
does not fail in unexpected or catastrophic ways. Robustness testing and
stress testing are variances of reliability testing based on this simple
criterion.
The robustness of a software component is the degree to which it can function
correctly in the presence of exceptional inputs or stressful environmental
conditions. Robustness testing differs from correctness testing in that the
functional correctness of the software is not of concern; it only watches for
robustness problems such as machine crashes, process hangs, or abnormal
termination. The oracle is relatively simple, so robustness testing can be
made more portable and scalable than correctness testing.
Stress testing, or load testing, is often used to test the whole system rather
than the software alone. In such tests the software or system is exercised at
or beyond the specified limits. Typical stresses include resource exhaustion,
bursts of activity, and sustained high loads.
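The following sketch shows the flavour of a robustness test: a set of exceptional inputs is fed to a placeholder parse routine, and the only oracle is "no crash or hang"; functional correctness of the result is deliberately ignored. The inputs and the parse method are assumptions for illustration.

import java.util.List;

// Robustness-test sketch: feed exceptional inputs and only watch for crashes or hangs.
public class RobustnessHarness {
    public static void main(String[] args) {
        List<String> exceptionalInputs = List.of(
            "", " ", "\0", "null", "😀", "A".repeat(1_000_000), "-999999999999999999999"
        );
        int survived = 0;
        for (String input : exceptionalInputs) {
            try {
                parse(input);          // any non-crashing outcome counts as surviving
                survived++;
            } catch (RuntimeException expected) {
                survived++;            // a clean exception is acceptable; a crash or hang is not
            }
        }
        System.out.printf("survived %d of %d exceptional inputs%n", survived, exceptionalInputs.size());
    }

    static int parse(String s) {
        return Integer.parseInt(s);    // stands in for the real component under test
    }
}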

Security testing
Software quality, reliability and security are tightly coupled. Flaws in
software can be exploited by intruders to open security holes. With the
development of the Internet, software security problems are becoming
even more severe.
Many critical software applications and services have integrated security
measures against malicious attacks. The purposes of security testing of these
systems include identifying and removing software flaws that may potentially
lead to security violations, and validating the effectiveness of security
measures. Simulated security attacks can be performed to find vulnerabilities.

WHITE-BOX TESTING:
White-box testing is a verification technique software engineers can use to
examine whether their code works as expected.
White-box testing is testing that takes into account the internal mechanism of
a system or component (IEEE, 1990). White-box testing is also known as
structural testing, clear box testing, and glass box testing. The connotations
of "clear box" and "glass box" appropriately indicate that you have full
visibility of the internal workings of the software product, specifically, the
logic and the structure of the code.

While white-box testing can be applied at the unit, integration, and system
levels of the software testing process, it is usually done at the unit level.
It can test paths within a unit, paths between units during integration, and
paths between subsystems during a system-level test. Though this method of
test design can uncover many errors or problems, it might not detect
unimplemented parts of the specification or missing requirements.
White-box test design techniques include:

Control flow testing

Data flow testing

Branch testing

Path testing

Statement coverage

Decision coverage
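To make two of these techniques concrete, the JUnit 5 sketch below (class and method names are hypothetical) contains a single decision: one test achieves statement coverage on its own, while decision coverage additionally requires a test that takes the false outcome of the condition.

import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class CoverageExample {
    // Hypothetical unit under test.
    static int discount(int amount, boolean member) {
        int result = amount;
        if (member && amount > 100) {   // one decision with a true and a false outcome
            result = amount - 10;
        }
        return result;
    }

    @Test
    void statementCoverageOnly() {
        // This single case executes every statement (the branch is taken)...
        assertEquals(190, discount(200, true));
    }

    @Test
    void addsDecisionCoverage() {
        // ...but decision coverage also needs the false outcome of the condition.
        assertEquals(50, discount(50, false));
    }
}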

Levels of white-box testing:

Unit testing, which is testing of individual hardware or software units or
groups of related units (IEEE, 1990). A unit is a software component that
cannot be subdivided into other components (IEEE, 1990). Software engineers
write white-box test cases to examine whether the unit is coded correctly.
Unit testing is important for ensuring the code is solid before it is
integrated with other code. Once the code is integrated into the code base,
the cause of an observed failure is more difficult to find. Also, since the
software engineer writes and runs unit tests him or herself, companies often
do not track the unit test failures that are observed, making these types of
defects the most "private" to the software engineer. We all prefer to find our
own mistakes and to have the opportunity to fix them without others knowing.
Approximately 65% of all bugs can be caught in unit testing.

Integration testing, which is testing in which software components, hardware
components, or both are combined and tested to evaluate the interaction
between them (IEEE, 1990). Test cases are written which explicitly examine the
interfaces between the various units. These test cases can be black box test
cases, whereby the tester understands that a test case requires multiple
program units to interact. Alternatively, white-box test cases are written
which explicitly exercise the interfaces that are known to the tester (a small
sketch follows this list).
Regression testing, which is selective retesting of a system or component to
verify that modifications have not caused unintended effects and that the
system or component still complies with its specified requirements (IEEE,
1990). As with integration testing, regression testing can be done via
black-box test cases, white-box test cases, or a combination of the two.
White-box unit and integration test cases can be saved and rerun as part of
regression testing.
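As a minimal sketch of the white-box integration idea above, the JUnit 5 test below exercises the interface between two hypothetical units, OrderService and PriceCalculator; both class names and the checkout behaviour are assumptions for illustration. Saved and rerun later, the same case also serves as a regression test.

import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

// Illustrative units: OrderService delegates price calculation to PriceCalculator,
// and the test deliberately exercises exactly that interface.
class PriceCalculator {
    int totalCents(int unitCents, int quantity) {
        return unitCents * quantity;
    }
}

class OrderService {
    private final PriceCalculator calculator = new PriceCalculator();

    int checkout(int unitCents, int quantity) {
        // Interface under test: OrderService -> PriceCalculator
        return calculator.totalCents(unitCents, quantity);
    }
}

class OrderServiceIntegrationTest {
    @Test
    void checkoutUsesPriceCalculatorAcrossTheUnitBoundary() {
        assertEquals(2500, new OrderService().checkout(500, 5));
    }
    // Rerun as part of regression testing, this same case verifies that later
    // modifications have not broken the interface between the two units.
}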

White Box Testing Steps


The white box testing process for an application block is shown in Figure
6.2.

Figure 6.2. White box testing process

White box testing involves the following steps:

1. Create test plans. Identify all white box test scenarios and
prioritize them.
2. Profile the application block. This step involves studying the
code at run time to understand the resource utilization, time spent
by various methods and operations, areas in code that are not
accessed, and so on.
3. Test the internal subroutines. This step ensures that the
subroutines or the nonpublic interfaces can handle all types of data
appropriately.
4. Test loops and conditional statements. This step focuses on
testing the loops and conditional statements for accuracy and
efficiency for different data inputs.
5. Perform security testing. White box security testing helps you
understand possible security loopholes by looking at the way the
code handles security.
The next sections describe each of these steps.

Step 1: Create Test Plans


The test plans for white box testing can be created only after a reasonably
stable build of the application block is available. The creation of test plans
involves extensive code review and input from design review and black
box testing. The test plans for white box testing include the following:

Profiling, including code coverage, resource utilization, and resource leaks

Testing internal subroutines for integrity and consistency in data processing

Loop testing: test simple, concatenated, nested, and unstructured loops

Conditional statements, such as simple expressions, compound expressions, and
expressions that evaluate to Boolean

Step 2: Profile the Application Block


Profiling allows you to monitor the behavior of a particular code path at
run time when the code is being executed. Profiling includes the following
tests:

Code coverage. Code coverage testing ensures that every line of code is
executed at least once during testing. You must develop test cases in a way
that ensures the entire execution tree is tested at least once. To ensure that
each statement is executed once, test cases should be based on the control
structure in the code and the sequence diagrams from the design documents. The
control structures in the code consist of various conditions as follows:
Various conditional statements that branch into different code paths. For
example, a Boolean variable that evaluates to "false" or "true" can execute
different code paths. There can be other compound conditions with multiple
conditions, Boolean operators, and bit-wise comparisons.
Various types of loops, such as simple loops, concatenated loops, and nested
loops.
There are various tools available for code coverage testing, but you still
need to execute the test cases. The tools identify the code that has been
executed during the testing. In this way, you can identify the redundant code
that never gets executed. This code may be left over from a previous version
of the functionality, or may signify partially implemented functionality or
dead code that never gets called.
Tables 6.3 and 6.4 list sample test cases for testing the code coverage of the
Configuration Manager class of the CMAB (Management Application Block).


Table 6.3: The CMAB Test Case Document for Testing the Code Coverage for the
InitAllProviders Method and All Invoked Methods

Scenario 1.3: Test the code coverage for the method InitAllProviders() in the
Configuration Manager class.

Priority: High

Execution details:
Create a sample application for reading configuration data from a data store
through the CMAB.
Run the application under the following conditions: with a default section
present; without a default section.
Trace the code coverage using an automated tool.
Report any code not being called in InitAllProviders().

Tools required: Custom test harness integrating the application block for
reading configuration data.

Expected results: The entire code for the InitAllProviders() method and all
the invoked methods should be covered under the preceding conditions.

Table 6.4: The CMAB Test Case Document for Testing the Code Coverage for the
Read Method and All Invoked Methods

Scenario 1.4: Test the code coverage for the method Read(sectionName) in the
Configuration Manager class.

Priority: High

Execution details:
Create a sample application for reading configuration data from a SQL database
through the CMAB.
Run the application under the following conditions:
Give a null section name or a section name of zero length to the Read method.
Read a section whose name is not mentioned in the App.config or Web.config
files.
Read a configuration section that has cache enabled.
Read a configuration section that has cache disabled.
Read a configuration section successfully with the cache disabled, and then
disconnect the database and read the section again.
Read a configuration section with the section having no configuration data in
the database.
Read a configuration section that does not have provider information mentioned
in the App.config or Web.config files.
Trace the code coverage.
Report any code not covered in the Read(sectionName) method.

Tools required: Custom test harness integrating the application block for
reading of configuration data.

Expected results: The entire code for the Read(sectionName) method and the
invoked methods should be covered under the preceding conditions.

Memory allocation pattern. You can profile the memory allocation pattern of
the application block by using code profiling tools.

Time taken for executing a code path. For scenarios where performance is
critical, you can profile the time they take. Timing a code path may require
custom instrumentation of the appropriate code. There are also various tools
available that help you measure the time it takes for a particular scenario to
execute by automatically creating instrumented assemblies of the application
block.

Profiling for excessive resource utilization. The input from a performance
test may show excessive resource utilization, such as CPU, memory, disk I/O,
or network I/O, for a particular usage scenario. But you may need to profile
the code to track the piece of code that is blocking resources
disproportionately.
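Where an automated profiler is not available, timing a code path with custom instrumentation can be as simple as the sketch below; the readSection method and its label are hypothetical stand-ins for the code path being profiled.

import java.util.function.Supplier;

// Minimal custom instrumentation for timing a single code path.
public class CodePathTimer {

    static <T> T timed(String label, Supplier<T> codePath) {
        long start = System.nanoTime();
        try {
            return codePath.get();
        } finally {
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            System.out.printf("%s took %d ms%n", label, elapsedMs);
        }
    }

    public static void main(String[] args) {
        String section = timed("read configuration section",
                () -> readSection("connectionStrings"));
        System.out.println(section);
    }

    static String readSection(String name) {
        // Stand-in for the real code path whose execution time is of interest.
        return "<" + name + "/>";
    }
}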

Step 3: Test the Internal Subroutines


Thoroughly test all internal subroutines for every type of input. The
subroutines that are internally called by the public API to process the input
may be working as expected for the expected input types. However, after
a thorough code review, you may notice that there are some expressions
that may fail for certain types of input. This warrants the testing of
internal methods and subroutines by developing NUnit tests for internal
functions after a thorough code review. Following are some examples of
potential pitfalls:

The code analysis reveals that the function may fail for a certain
input value. For example, a function expecting numeric input may
fail for an input value of 0.

In the case of the CMAB, the function reads information from the
cache. The function returns the information appropriately if the
cache is not empty. However, if during the process of reading, the
cache is flushed or refreshed, the function may fail.

The function may be reading values into a buffer before returning them to the
client. Certain input values might result in a buffer overflow and loss of
data.

The subroutine does not handle an exception where a remote call to a database
is not successful. For example, in the CMAB, if the function is trying to
update the SQL Server information but the SQL Server database is not
available, it does not log the failure to the appropriate event sink.
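A hedged example of testing an internal (non-public) subroutine directly is shown below, using JUnit 5; InternalMath.averagePerItem is invented for illustration, and the second test captures the "fails for an input value of 0" pitfall described above. In Java, a package-private helper can be exercised by a test class placed in the same package.

import static org.junit.jupiter.api.Assertions.*;
import org.junit.jupiter.api.Test;

// Illustrative internal subroutine used by a public API.
class InternalMath {
    static int averagePerItem(int totalCents, int itemCount) {
        return totalCents / itemCount;   // fails for itemCount == 0
    }
}

class InternalMathTest {
    @Test
    void worksForExpectedInput() {
        assertEquals(250, InternalMath.averagePerItem(1000, 4));
    }

    @Test
    void revealsFailureForZeroInput() {
        // Code review suggested the subroutine may fail for 0; the test confirms it.
        assertThrows(ArithmeticException.class, () -> InternalMath.averagePerItem(1000, 0));
    }
}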

Step 4: Test Loops and Conditional Statements


The application block may contain various types of loops, such as simple,
nested, concatenated, and unstructured loops. Although unstructured
loops require redesigning, the other types of loops require extensive
testing for various inputs. Loops are critical to the application block
performance because they magnify seemingly trivial problems by iterating
through the loop multiple times.
Some common errors can cause a loop to execute an infinite number of times.
This can result in excessive CPU or memory utilization, causing the
application to fail. Therefore, all loops in the application block should be
tested for the following conditions (a small sketch follows this list):

Provide input that results in executing the loop zero times. This can be
achieved when the loop's starting value is already past its termination value
(for example, the lower bound is greater than the upper bound).

Provide input that results in executing the loop one time. This can be
achieved where the lower bound value and upper bound value are the same.

Provide input that results in executing the loop a specified number of times
within a specific range.

Provide input such that the loop iterates n, n-1, and n+1 times. The
out-of-bound iterations (n-1 and n+1) are very difficult to detect with a
simple code review; therefore, there is a need to execute special test cases
that can simulate such cases.
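The loop-testing sketch below applies these conditions to a hypothetical sumFirst method: the loop is executed zero times, one time, and around its upper boundary (n-1, n, and n+1 iterations), with the out-of-bound case expected to fail. Class and method names are assumptions for illustration.

import static org.junit.jupiter.api.Assertions.*;
import org.junit.jupiter.api.Test;

// Illustrative unit whose loop is exercised at its boundaries.
class LoopUnderTest {
    // Sums the first n elements of data.
    static int sumFirst(int[] data, int n) {
        int total = 0;
        for (int i = 0; i < n; i++) {
            total += data[i];
        }
        return total;
    }
}

class LoopBoundaryTest {
    private final int[] data = {1, 2, 3, 4, 5};

    @Test
    void loopExecutesZeroTimes() {
        assertEquals(0, LoopUnderTest.sumFirst(data, 0));
    }

    @Test
    void loopExecutesOnce() {
        assertEquals(1, LoopUnderTest.sumFirst(data, 1));
    }

    @Test
    void loopExecutesAroundTheUpperBoundary() {
        assertEquals(10, LoopUnderTest.sumFirst(data, 4));  // n - 1 iterations
        assertEquals(15, LoopUnderTest.sumFirst(data, 5));  // n iterations
        // n + 1 iterations would read past the array and is expected to throw:
        assertThrows(ArrayIndexOutOfBoundsException.class,
                () -> LoopUnderTest.sumFirst(data, 6));
    }
}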

When testing nested loops, you can start by testing the innermost loop, with
all other loops set to iterate a minimum number of times. After the innermost
loop is tested, you can set it to iterate a minimum number of times, and then
test the outermost loop as if it were a simple loop.
Also, all of the conditional statements should be completely tested. The
process of conditional testing ensures that the controlling expressions
have been exercised during testing by presenting the evaluating
expression with a set of input values. The input values ensure that all
possible outcomes of the expressions are tested for expected output. The
conditional statements can be a relational expression, a simple condition,
a compound condition, or a Boolean expression.

Step 5: Perform Security Testing


White box security testing focuses on identifying test scenarios and
testing based on knowledge of implementation details. During code
reviews, you can identify areas in code that validate data, handle data,
access resources, or perform privileged operations. Test cases can be
developed to test all such areas. Following are some examples:

Validation techniques can be tested by passing negative values, null values,
and so on, to make sure the proper error message is displayed.

If the application block handles sensitive data and uses cryptography, then,
based on knowledge from code reviews, test cases can be developed to validate
the encryption technique or cryptography methods.
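The following JUnit 5 sketch illustrates the first of these examples; the Validator class and its range limits are assumptions, and the tests pass null and negative values to confirm that the validation routine rejects them.

import static org.junit.jupiter.api.Assertions.*;
import org.junit.jupiter.api.Test;

// Illustrative validation routine exercised with hostile and boundary inputs.
class Validator {
    static void checkAmount(Integer cents) {
        if (cents == null) {
            throw new IllegalArgumentException("amount must not be null");
        }
        if (cents < 0 || cents > 1_000_000) {
            throw new IllegalArgumentException("amount out of range: " + cents);
        }
    }
}

class ValidatorSecurityTest {
    @Test
    void rejectsNull() {
        assertThrows(IllegalArgumentException.class, () -> Validator.checkAmount(null));
    }

    @Test
    void rejectsNegativeValue() {
        assertThrows(IllegalArgumentException.class, () -> Validator.checkAmount(-1));
    }

    @Test
    void acceptsValueInsideTheValidatedRange() {
        assertDoesNotThrow(() -> Validator.checkAmount(500));
    }
}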

Advantages
White-box testing is one of the two biggest testing methodologies used today.
Its main advantages are:
1. Knowledge of the source code is beneficial to thorough testing.
2. It enables optimization of code by revealing hidden errors and making it
possible to remove these possible defects.
3. It gives the programmer introspection, because developers must carefully
describe any new implementation.
4. It helps in optimizing the code.
5. Extra lines of code, which can introduce hidden defects, can be removed.
6. Because of the tester's knowledge of the code, maximum coverage is attained
when writing test scenarios.
7. As the tester has knowledge of the source code, it becomes very easy to
find out which type of data can help in testing the application effectively.

Disadvantages
Although white-box testing has great advantages, it is not perfect and has
some disadvantages:
1. White-box testing brings complexity to testing, because to be able to test
every important aspect of the program you must have great knowledge of the
program. White-box testing requires a programmer with a high level of
knowledge due to the complexity of the level of testing that needs to be done.
2. On some occasions, it is not realistic to be able to test every single
existing condition of the application, and some conditions will go untested.
3. Because a skilled tester is needed to perform white-box testing, the costs
are increased.
4. Sometimes it is impossible to look into every nook and corner to find
hidden errors that may create problems, as many paths will go untested.
5. It is difficult to maintain white-box testing, as the use of specialized
tools like code analyzers and debugging tools is required.

GREY BOX TESTING


Grey box testing is a technique for testing an application with limited
knowledge of its internal workings. In software testing, the saying "the more
you know, the better" carries a lot of weight when testing an application.
Mastering the domain of a system always gives the tester an edge over someone
with limited domain knowledge. Unlike black box testing, where the tester only
tests the application's user interface, in grey box testing the tester has
access to design documents and the database. Having this knowledge, the tester
is able to better prepare test data and test scenarios when making the test
plan.

What is Gray Box Testing?


Gray box testing is a combination of white box and black box testing: the
testing of a software application using an effective combination of both
methods. This is a nice and powerful way to test an application.
White box testing means the tester is aware of the internal structure of the
code, while the black box tester is not. In gray box testing the tester
usually has limited knowledge of, and access to, the code; based on this
knowledge the test cases are designed, and the software application under test
is then treated as a black box, with the tester testing the application from
the outside. Do not confuse gray box testing with white box testing: in gray
box testing the tester does not have detailed knowledge of the code. Nor is
gray box testing a black box method, because the tester does know some part of
the internal structure of the code. So the gray box approach is the testing
approach used when some knowledge of the internal structure is available, but
not in detail.
The name comes from the fact that, to the tester, the application is like a
gray box: only partially transparent, so the tester can see part of what is
inside but not all of it. Because the tester does not require access to the
source code, gray box testing is also described as unbiased and non-intrusive.
Gray box testing is commonly used to test Web services applications.

Gray Box Testing Example:


We will take the example of web application testing. To explore gray box
testing we will take a simple piece of functionality of a web application. The
user enters an email ID as input in the web form and, upon submitting a valid
email ID, should receive some articles by email based on the interests
(fields) entered. The validation of the email is done using JavaScript on the
client side only. In this case, if the tester does not know the internal
structure of the implementation, the form might be tested with cases such as a
valid email ID and an invalid email ID, checking on that basis whether the
functionality is working or not.
But the tester is aware of some of the internal structure, and knows that the
system makes assumptions such as:

System will not get Invalid email ID

System will not send email to invalid email ID

System will not receive failure email notifications.

In this type of testing you also have to test the application with JavaScript
disabled. JavaScript may fail for any number of reasons; the system then gets
an invalid email to process, all the assumptions made by the system fail, and
incorrect inputs are sent to the system, so:

System will get Invalid email ID to process

System will send email to invalid email ID

System will receive failure email notifications.
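A hedged sketch of such a test is shown below: knowing that validation happens only in client-side JavaScript, the tester posts an invalid email directly to the server, bypassing the script. The endpoint URL, form field name, and interpretation of the status code are all assumptions for illustration.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Grey-box check: send an invalid email straight to the server endpoint,
// simulating the case where client-side JavaScript validation is bypassed.
public class GreyBoxEmailTest {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://example.test/subscribe"))   // hypothetical endpoint
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString("email=not-an-email"))
                .build();

        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());

        // If the server relies solely on the client-side check, it may accept the
        // invalid address (e.g. 200 OK) instead of rejecting it (e.g. 400).
        System.out.println("status = " + response.statusCode());
    }
}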

Advantages
1. Offers combined benefits of black box and white box testing
wherever possible.
2. Grey box testers don't rely on the source code; instead they rely on
interface definition and functional specifications.
3. Based on the limited information available, a grey box tester can
design excellent test scenarios especially around communication
protocols and data type handling.
4. The test is done from the point of view of the user and not the
designer.

Disadvantages
1. Since the access to source code is not available, the ability to go
over the code and test coverage is limited.
2. The tests can be redundant if the software designer has already run
a test case.
3. Testing every possible input stream is unrealistic because it would
take an unreasonable amount of time; therefore, many program
paths will go untested.
