Software Quality Assurance
Software Quality
Software quality is conformance to explicitly stated functional and performance requirements, explicitly documented development standards, and implicit characteristics that are expected of all professionally developed software.
What is quality?
Quality, simplistically, means that a product should meet its specification.
This is problematical for software systems:
Some quality requirements are difficult to specify in an unambiguous way (e.g. reusability, etc.);
Software specifications are usually incomplete and often inconsistent.
User view: quality is fitness for use.
Manufacturing view: quality is the result of the right development of the product.
Value-based view (Economic): quality is a function of costs and benefits.
Measuring Quality?
Reliability:
Reliability is the probability of a component, or system, functioning
correctly over a given period of time under a given set of operating
conditions.
Availability:
Availability is the probability that a system, at a given point in time, is operational and able to deliver the requested services.
SQA Activities:
1. Application of Technical Methods
3. Software Testing
4. Enforcement of Standards
5. Control of Change: controls changes to the software and provides the opportunity to evaluate the impact and cost of changes before committing resources
6. Measurement
SQA Questions
Error/defect collection and analysis: collects and analyzes error and defect data to better understand how errors are introduced and how they can be eliminated.
Change management: ensures that adequate change control practices are in place.
SQA Tasks
Formal SQA
Applies mathematical proof-of-correctness techniques to demonstrate that a program conforms exactly to its specification.
ISO 9000 describes the quality elements that must be present for a
quality assurance system to be compliant with the standard, but it
does not describe how an organization should implement these
elements.
SQA Plan
Test section - references the test plan and procedure document and
defines test record keeping requirements
Software reliability
Software reliability is an important facet of software quality. It is defined as
"the probability of failure-free operation of a computer program in a
specified environment for a specified time."
One of reliability's distinguishing characteristics is that it is objective and
measurable, and it can be estimated, whereas much of software quality rests on
subjective criteria. This distinction is especially important in the discipline
of Software Quality Assurance. These measured criteria are typically
called software metrics.
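As an illustration (not part of the definition above), under the common textbook assumption of a constant failure rate \(\lambda\), reliability and availability can be expressed as

\[ R(t) = P(\text{no failure in } [0, t]) = e^{-\lambda t}, \qquad \text{MTTF} = \frac{1}{\lambda}, \qquad A = \frac{\text{MTTF}}{\text{MTTF} + \text{MTTR}} \]

where MTTF is the mean time to failure and MTTR the mean time to repair. These are standard formulas for the exponential model; the text above does not commit to any particular model.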
History
With software embedded into many devices today, software failure has
caused more than inconvenience. Software errors have even caused
human fatalities. The causes have ranged from poorly designed user
interfaces to direct programming errors. An example of a programming
error that led to multiple deaths is discussed earlier. This has resulted
in requirements for the development of some types of software. In the United
States, both the Food and Drug Administration (FDA) and Federal Aviation
Administration (FAA) have requirements for software development.
Goal of reliability
The need for a means to objectively determine software reliability comes
from the desire to apply the techniques of contemporary engineering
fields to the development of software. That desire is a result of the
common observation, by both lay-persons and specialists, that computer
software does not work the way it ought to. In other words, software is
seen to exhibit undesirable behavior, up to and including outright failure. As software continues to pervade everyday devices and systems, it is expected that this infiltration will continue, along with an accompanying increase in our reliance on that software.
Challenge of reliability
The circular logic of the preceding sentence is not accidental; it is meant
to illustrate a fundamental problem in the issue of measuring software
reliability, which is the difficulty of determining, in advance, exactly how
the software is intended to operate. The problem seems to stem from a
common conceptual error in the consideration of software, which is that
software in some sense takes on a role which would otherwise be filled by
a human being. This is a problem on two levels. Firstly, most modern
software performs work which a human could never perform, especially at
the high level of reliability that is often expected from software in
comparison to humans. Secondly, software is fundamentally incapable of
most of the mental capabilities of humans which separate them from
mere mechanisms: qualities such as adaptability, general-purpose knowledge, and common sense. What can actually be observed and measured, therefore, is only the behavior of the software in a given environment, with given data.
Design
While requirements are meant to specify what a program should do,
design is meant, at least at a high level, to specify how the program
should do it. The usefulness of design is also questioned by some, but
those who look to formalize the process of ensuring reliability often offer
good software design processes as the most significant means to
accomplish it. Software design usually involves the use of more abstract
and general means of specifying the parts of the software and what they
do. As such, it can be seen as a way to break a large program down into
many smaller programs, such that those smaller pieces together do the
work of the whole program.
The purposes of high-level design are as follows. It separates what are
considered to be problems of architecture, or overall program concept and
structure, from problems of actual coding, which solve problems of actual
data processing. It applies additional constraints to the development
process by narrowing the scope of the smaller software components, and
thereby (it is hoped) removing variables which could increase the
likelihood of programming errors. It provides a program template,
including the specification of interfaces, which can be shared by different
teams of developers working on disparate parts, such that they can know
in advance how each of their contributions will interface with those of the
other teams. Finally, and perhaps most controversially, it specifies the
Programming
The history of computer programming language development can often
be best understood in the light of attempts to master the complexity of
computer programs, which otherwise becomes more and more difficult to manage.
Testing
Software testing, when done correctly, can increase overall software quality. Common types of testing include:
1. Unit Testing
2. Functional Testing
3. Regression Testing
4. Performance Testing
5. Failover Testing
6. Usability Testing
A number of agile methodologies use testing early in the development
cycle to ensure quality in their products. For example, the test-driven
development practice, where tests are written before the code they will
test, is used in Extreme Programming to ensure quality.
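As a small illustration of the test-first idea, here is a hedged sketch in Python. The apply_discount function and its expected behavior are invented for this example; in genuine test-driven development the test class would be written before the function exists.

```python
import unittest

# Hypothetical example: the tests describe the expected behaviour of a
# discount function before (and independently of) its implementation.
def apply_discount(price, percent):
    """Return the price reduced by the given percentage."""
    if percent < 0 or percent > 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    def test_ten_percent_off(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_rejects_invalid_percentage(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()
```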
Runtime
Runtime reliability determinations are similar to tests, but go beyond
simple confirmation of behavior to the evaluation of qualities such as
performance and interoperability with other code or particular hardware
configurations.
Understandability
Clarity of purpose. This goes further than just a statement of purpose; all
of the design and user documentation must be clearly written so that it is
easily understandable. This is obviously subjective in that the user context
must be taken into account: for instance, if the software product is to be
Completeness
Presence of all constituent parts, with each part fully developed. This
means that if the code calls a subroutine from an external library, the
software package must provide reference to that library and all required
parameters must be passed. All required input data must also be
available.
Conciseness
Minimization of excessive or redundant information or processing. This is
important where memory capacity is limited, and it is generally
considered good practice to keep lines of code to a minimum. It can be
improved by replacing repeated functionality by one subroutine or
function which achieves that functionality. It also applies to documents.
Portability
Ability to be run well and easily on multiple computer configurations.
Portability can mean both between different hardware, such as running on a PC as well as a smartphone, and between different operating systems, such as running on both Mac OS X and GNU/Linux.
Consistency
Uniformity in notation, symbology, appearance, and terminology within
itself.
Maintainability
Propensity to facilitate updates to satisfy new requirements. Thus the
software product that is maintainable should be well-documented, should
not be complex, and should have spare capacity for memory, storage and
processor utilization and other resources.
Testability
Usability
Convenience and practicality of use. This is affected by such things as the
human-computer interface. The component of the software that has most
impact on this is the user interface (UI), which for best usability is usually
graphical (i.e. a GUI).
Reliability
Ability to be expected to perform its intended functions satisfactorily. This
implies a time factor in that a reliable product is expected to perform
correctly over a period of time. It also encompasses environmental
considerations in that the product is required to perform correctly in
whatever conditions it finds itself (sometimes termed robustness).
Efficiency
Fulfillment of purpose without waste of resources, such as memory, space
and processor utilization, network bandwidth, time, etc.
Security
Ability to protect data against unauthorized access and to withstand
malicious or inadvertent interference with its operations. Besides the
presence of appropriate security mechanisms such as authentication,
access control and encryption, security also implies resilience in the face
of malicious, intelligent and adaptive attackers.
Some practitioners maintain that contexts where quantitative measures are useful are quite rare, and
so prefer qualitative measures. Several leaders in the field of software
testing have written about the difficulty of measuring what we truly want
to measure well.
One example of a popular metric is the number of faults encountered in
the software. Software that contains few faults is considered by some to
have higher quality than software that contains many faults. Questions
that can help determine the usefulness of this metric in a particular
context include:
1. What constitutes many faults? Does this differ depending upon
the purpose of the software (e.g., blogging software vs. navigational
software)? Does this take into account the size and complexity of
the software?
2. Does this account for the importance of the bugs (and the
importance to the stakeholders of the people those bugs bug)? Does
one try to weight this metric by the severity of the fault, or the
incidence of users it affects? If so, how? And if not, how does one
know that 100 faults discovered is better than 1000?
3. If the count of faults being discovered is shrinking, how do I know
what that means? For example, does that mean that the product is
now higher quality than it was before? Or that this is a smaller/less
ambitious change than before? Or that fewer tester-hours have gone
into the project than before? Or that this project was tested by less
skilled testers than before? Or that the team has discovered that
fewer faults reported is in their interest?
This last question points to an especially difficult one to manage. All
software quality metrics are in some sense measures of human behavior,
since humans create software. If a team discovers that they will benefit
from a drop in the number of reported bugs, there is a strong tendency for
the team to start reporting fewer defects. That may mean that email
begins to circumvent the bug tracking system, or that four or five bugs
get lumped into one bug report, or that testers learn not to report minor issues.
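To make question 2 above concrete, the following sketch shows one possible way to weight a fault count by severity and by the number of users affected. The weights, field names, and fault records are all invented; as the discussion above stresses, any such scheme is a judgment call rather than a standard formula.

```python
# Hypothetical weighting scheme -- the weights and fault records are made up
# for illustration; the text deliberately leaves these choices open.
SEVERITY_WEIGHTS = {"critical": 10, "major": 3, "minor": 1}

faults = [
    {"id": 1, "severity": "critical", "users_affected": 5000},
    {"id": 2, "severity": "minor", "users_affected": 12},
    {"id": 3, "severity": "major", "users_affected": 800},
]

def weighted_fault_score(fault_list):
    """Sum severity weights, scaled by how many users each fault affects."""
    return sum(SEVERITY_WEIGHTS[f["severity"]] * f["users_affected"] for f in fault_list)

print(weighted_fault_score(faults))  # 52412
```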
Inspection teams should have four members. For example, for a design inspection, there must be a moderator, a designer, a coder, and a tester. The moderator is the leader of the team. The roles of reader and recorder would be played by the designer and the coder or the tester in this case.
Walkthroughs and, even more so, inspections have a good chance of finding faults early in the development. However, for large projects, they require that the work products be small enough (reviews should not last many hours) and, unless good documentation is available at all steps, they are not effective at discovering problems.
Such methods can be more effective than informal reviews and require less
effort than formal proofs, but their success depends on having a sound,
systematic procedure. Tools that support this procedure are also
important.
The Workshop on Inspection in Software Engineering (WISE), a satellite
event of the 2001 Computer Aided Verification Conference (CAV 01),
brought together researchers, practitioners, and regulators in the hope of
finding new, more effective software inspection approaches. Submissions
described how practitioners and researchers were performing inspections,
discussed inspections' relevance, provided evidence of how refinement of
the inspection process and computer-aided tool support can improve
inspections, and explained how careful software design could make
inspections more effective. The best ideas from the workshop have been
distilled into pairs of articles appearing in linked special issues of IEEE
Software and IEEE Transactions on Software Engineering.
- Undetected faults from the definition and design phases later cause high consequential costs
- As inspections are executed in a team, the knowledge base is enhanced
- Implements the principle of external quality control
- Delivery of high-quality results to the subsequent development phase (milestone)
- Responsibility for the quality is assigned to the whole team
- Manual testing of products is a useful complement to tool-supported tests
- Compliance to standards is permanently monitored
- Critical product components are detected early
- Every successful inspection is a milestone in the project
- Every member of the inspection team becomes acquainted with the software from inspection to inspection
- It turned out that functioning inspections are a very efficient means for quality assurance
INSPECTION PROCESS
Inspection Process Overview
The inspection process consists of seven primary steps: Planning,
Overview, Preparation, Inspection, Discussion, Rework, and Follow-up. Two
of these steps (Overview and Discussion) are optional based on the needs
of the project. This chapter describes each of these steps.
Inspection Step: Deliverable(s) (Responsible Role)
Planning: Participant List (Moderator); Materials for Inspection (Author); Agenda (Moderator); Entrance Criteria (Moderator)
Overview: Common Understanding of Project (Moderator)
Preparation: Preparation Logs (Each Inspector)
Inspection: Defect Log (Recorder); Inspection Report (Moderator)
Discussion: Suggested Defect Fixes (Various)
Rework: Corrected Defects (Author)
Follow-up: Inspection Report (amended) (Moderator)
PLANNING AN INSPECTION
Planning for a software review or inspection consists of four key parts: selecting the participants, developing the agenda, distributing necessary materials, and determining entrance criteria.
1. Selecting Participants
Selection of participants for an inspection can involve good political and
negotiation skills. The main purpose of the inspection is to improve
software quality and reduce defects. Rule of thumb #1: If a participant
does not have the qualifications to contribute to the inspection, they
should not be included.
Another complexity to selecting participants is how many people should
be included. The number is somewhat determined by assigning specific
roles. Section 5.0 discusses roles and how to assign them. A second
consideration on team size has to do with communication. The larger the
group, the more lines of communication must be maintained. If the group
grows beyond 6-10 people, the time spent on communication, scheduling,
and maintaining focus will detract from the quality and timeliness of the
inspection. Rule of thumb #2: The optimum team size for inspections is
less than 10.
A common question is should managers be included? Managers should be
aware of the outcome of inspections, but generally not included. Referring
to rule of thumb #1, managers should only participate if they will directly
add value to the substance of the inspection. Inspections and reviews are
meant to improve software quality, not to assess or manage people. If the
latter becomes the goal of the process, the inspections will be threatening
and participants will not be completely open.
2. Developing the Agenda
The agenda should be developed based on the organization's standard inspection practices.
3. Distributing Materials
Prior to an inspection, all materials that will be used in the meeting should
be distributed. The lead time depends upon the size of the items to be
reviewed. In general, there should be time for each participant to
thoroughly review the component they are responsible for inspecting. The
author should provide the materials to the moderator. The moderator will
verify and distribute them.
The method of distribution for materials is dependent on the culture of the
development team and the type of inspection to be conducted. Many
participants may prefer hard copy for design elements, while others prefer
to navigate through electronic documents. Online access to code for
inspection may be preferred for access to search functions.
Inspection participants should be expected to bring any materials they
need for the review to the meeting. There should not be new material at
the meeting; if materials were incomplete, the review should be
rescheduled.
4. Entrance Criteria
Specific criteria should be required to qualify a product or artifact for an
inspection [COLL]. Typical entrance criteria would be that the product is
complete and would be delivered to the customer if the inspection finds
no defects.
2. Checklists
Inspections cover a wide variety of project artifacts, including
requirements, design, code, and test plans. For each different type of
inspection it is useful for an organization to create standard checklists for
use by the inspectors [NASA1]. The checklists should be "living"
documents so an organization can improve on inspection practices and
develop core inspection knowledge.
3. Preparation Logs
During preparation, inspectors should log all defects found and track the
time spent on preparation. The use of a Preparation Log by each inspector
is a best practice. The preparation log should be given to the moderator
before the inspection meeting. The moderator can determine based on all
the logs whether the inspection team is adequately prepared for the
event.
3. Defect Log
An accurate log of all defects must be kept. The log should indicate what
the defect is, the defect type, where it was found, and the severity of the
defect (major, minor). The defect log will be used by the author to
prioritize and correct defects. It may also be used for metrics in a process
improvement effort. The Defect Log should not be used in evaluating
individuals. Its purpose is to improve software quality through honest and
open feedback; any other use could lead to problems.
The defect log may be sufficient for many projects. However, defects may
be somewhat complex and further information may need to be provided. A
defect report form can be used in conjunction with the defect log to
provide more information. In such circumstances, the defect log may
become the index to individual defect reports.
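Purely as an illustration, a defect log entry could be represented with fields matching the description above (what the defect is, its type, where it was found, and its severity); the field names and the sample entry below are invented.

```python
from dataclasses import dataclass

# Illustrative sketch only: fields follow the description in the text above.
@dataclass
class DefectLogEntry:
    description: str      # what the defect is
    defect_type: str      # e.g. "logic", "interface", "documentation"
    location: str         # where it was found (file, section, page, ...)
    severity: str         # "major" or "minor"

entry = DefectLogEntry(
    description="Loop counter never incremented in retry logic",
    defect_type="logic",
    location="config_reader.c, lines 120-135",
    severity="major",
)
print(entry)
```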
4. Inspection Report
Inspections are performed many times on many work products during a given product development. How well the inspection teams do their job will decide whether there is a net decrease in the overall development schedule and an increase in net productivity. To carry out an inspection, five specific procedural roles are always assigned.
1. Roles and Responsibilities
Moderator
Reader
Recorder
Author
Inspector
Inspection meetings should be limited in length if possible, to preserve the effectiveness of the process.
3. Participation of inspectors
I. Planning
Roles: - Moderator
- Author
II. Overview
Roles: - Moderator
- Author
- Inspectors
III. Preparation
Roles: - All inspectors
V. Discussion
VI. Rework
Roles: - Author
VII. Follow-up
Roles: - Moderator
- Author
REVIEWS
decision making
solving of conflicts (e.g. concerning design decisions)
exchange of information
brainstorming
Normally no formal procedure exists for the execution and the choice of participants.
SOFTWARE TESTING
Introduction
Software Testing is the process of executing a program or system with the
intent of finding errors. Or, it involves any activity aimed at evaluating an
attribute or capability of a program or system and determining that it
meets its required results. Software is not unlike other physical processes
where inputs are received and outputs are produced. Where software
differs is in the manner in which it fails. Most physical systems fail in a
fixed (and reasonably small) set of ways. By contrast, software can fail in
many bizarre ways. Detecting all of the different failure modes for
software is generally infeasible.
Unlike most physical systems, most of the defects in software are design
errors, not manufacturing defects. Software does not suffer from
corrosion, wear-and-tear -- generally it will not change until upgrades, or
until obsolescence. So once the software is shipped, the design defects -- or bugs -- will be buried in and remain latent until activation.
Software bugs will almost always exist in any software module with
moderate size: not because programmers are careless or irresponsible,
but because the complexity of software is generally intractable -- and
humans have only limited ability to manage complexity. It is also true that
for any complex systems, design defects can never be completely ruled
out.
Discovering the design defects in software is equally difficult, for the
same reason of complexity. Because software and any digital systems are
not continuous, testing boundary values is not sufficient to guarantee
correctness. All the possible values need to be tested and verified, but
complete testing is infeasible. Exhaustively testing a simple program to
add only two integer inputs of 32-bits (yielding 2^64 distinct test cases)
would take on the order of hundreds of millions of years, even if tests were performed at a rate of
thousands per second. Obviously, for a realistic software module, the
complexity can be far beyond the example mentioned here. If inputs from
the real world are involved, the problem will get worse, because timing
and unpredictable environmental effects and human interactions are all
possible input parameters under consideration.
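A quick back-of-the-envelope check of the claim above, under the stated assumptions (2^64 test cases, a few thousand tests per second):

```python
# Rough check of the exhaustive-testing claim; the test rate is assumed.
cases = 2 ** 64                    # about 1.8e19 distinct 32-bit input pairs
rate = 5_000                       # tests per second (assumed)
seconds = cases / rate
years = seconds / (60 * 60 * 24 * 365)
print(f"{years:.2e} years")        # roughly 1e8 years
```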
A further complication has to do with the dynamic nature of programs. If a
failure occurs during preliminary testing and the code is changed, the
software may now work for a test case that it didn't work for previously.
But its behavior on pre-error test cases that it passed before can no longer be guaranteed.
To improve quality.
As computers and software are used in critical applications, the outcome
of a bug can be severe. Bugs can cause huge losses. Bugs in critical
systems have caused airplane crashes, allowed space shuttle missions to
go awry, halted trading on the stock market, and worse. Bugs can kill.
Bugs can cause disasters. The so-called year 2000 (Y2K) bug has given
birth to a cottage industry of consultants and programming tools
dedicated to making sure the modern world doesn't come to a screeching
halt on the first day of the next century. In a computerized embedded
world, the quality and reliability of software is a matter of life and death.
Quality means the conformance to the specified design requirement.
Being correct, the minimum requirement of quality, means performing as
required under specified circumstances. Debugging, a narrow view of
software testing, is performed heavily to find out design defects by the
programmer. The imperfection of human nature makes it almost
impossible to make a moderately complex program correct the first time.
Finding the problems and getting them fixed is the purpose of debugging in the programming phase.
We cannot test quality directly, but we can test related factors to make quality visible. Quality has three sets of factors -- functionality, engineering, and adaptability -- and each set can be broken down into its component factors and considerations at different levels of detail:
Functionality (exterior quality)
Engineering (interior quality): Efficiency, Testability, Documentation, Structure
Adaptability (future quality): Flexibility, Reusability, Maintainability
Good testing provides measures for all relevant factors. The importance of
any particular factor varies from application to application. Any system
where human lives are at stake must place extreme emphasis on reliability.
Performance testing
Not all software systems have specifications on performance explicitly. But
every system will have implicit performance requirements. The software
should not take infinite time or infinite resource to execute. "Performance
bugs" sometimes are used to refer to those design problems in software
that cause the system performance to degrade.
The goal of performance testing can be performance bottleneck identification, performance comparison and evaluation, and so on.
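A minimal sketch of what an automated performance check might look like; the function under test, the input size, and the 0.5 second budget are all assumptions invented for illustration rather than part of any standard.

```python
import time

# Sketch of a performance regression check with an assumed latency budget.
def build_index(records):
    return {r["id"]: r for r in records}

def test_build_index_within_budget():
    records = [{"id": i, "value": i * 2} for i in range(100_000)]
    start = time.perf_counter()
    build_index(records)
    elapsed = time.perf_counter() - start
    assert elapsed < 0.5, f"performance bug: indexing took {elapsed:.3f}s"

test_build_index_within_budget()
```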
Reliability testing
Software reliability refers to the probability of failure-free operation of a
system. It is related to many aspects of software, including the testing
process. Directly estimating software reliability by quantifying its related
factors can be difficult. Testing is an effective sampling method to
measure software reliability. Guided by the operational profile, software
testing (usually black-box testing) can be used to obtain failure data, and
an estimation model can be further used to analyze the data to estimate
the present reliability and predict future reliability. Therefore, based on the
estimation, the developers can decide whether to release the software,
and the users can decide whether to adopt and use the software. Risk of
using software can also be assessed based on reliability information.
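A minimal sketch of the simplest estimation approach consistent with the description above: run test cases drawn from the operational profile and treat the observed failure proportion as the failure probability per demand (often attributed to the Nelson model). The run and failure counts below are invented.

```python
# Sketch: estimate per-demand reliability from black-box test runs that
# were sampled according to the operational profile (numbers are invented).
runs = 10_000          # test runs drawn from the operational profile
failures = 7           # observed failures

failure_prob = failures / runs          # estimated probability of failure per run
reliability = 1 - failure_prob          # estimated reliability per demand
print(f"Estimated reliability per demand: {reliability:.4f}")
```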
Security testing
Software quality, reliability and security are tightly coupled. Flaws in
software can be exploited by intruders to open security holes. With the
development of the Internet, software security problems are becoming
even more severe.
Many critical software applications and services have integrated security
measures against malicious attacks. The purposes of security testing of
these systems include identifying and removing software flaws that may
potentially lead to security violations, and validating the effectiveness of
security measures. Simulated security attacks can be performed to find
vulnerabilities.
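A minimal sketch of a simulated-attack test along the lines described above; the validator, the payloads, and the acceptance rule are invented, and real security testing would use far larger attack corpora and dedicated tools.

```python
import re

# Invented validator: accept only short alphanumeric usernames.
def is_safe_username(name):
    return bool(re.fullmatch(r"[A-Za-z0-9_]{1,32}", name))

attack_payloads = [
    "admin'; DROP TABLE users;--",   # SQL-injection style input
    "<script>alert(1)</script>",     # cross-site scripting style input
    "A" * 10_000,                    # oversized input
]

# Each simulated attack input must be rejected by the validator.
for payload in attack_payloads:
    assert not is_safe_username(payload), f"payload accepted: {payload!r}"
```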
WHITE-BOX TESTING:
White-box testing is a verification technique software engineers can use to
examine if their code works as expected.
White-box testing is testing that takes into account the internal mechanism of a system or component (IEEE, 1990). White-box testing is also known as structural testing, clear box testing, and glass box testing. The connotations of "clear box" and "glass box" appropriately indicate that you have full visibility of the internal workings of the software product, specifically, the logic and the structure of the code.
While white-box testing can be applied at
the unit, integration and system levels of the software testing process, it is
usually done at the unit level. It can test paths within a unit, paths
between units during integration, and between subsystems during a
system-level test. Though this method of test design can uncover many
errors or problems, it might not detect unimplemented parts of the
specification or missing requirements.
White-box test design techniques include:
Branch testing
Path testing
Statement coverage
Decision coverage
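A small invented example may help distinguish statement coverage from decision (branch) coverage; the classify function below is not from the text.

```python
# Toy function used only to illustrate statement vs. branch coverage.
def classify(temperature):
    label = "normal"
    if temperature > 38.0:
        label = "fever"
    return label

# A single test with temperature=39 executes every statement (100% statement
# coverage) but exercises only the "true" branch of the decision.
assert classify(39.0) == "fever"

# Branch (decision) coverage additionally requires a case where the
# condition evaluates to false.
assert classify(36.6) == "normal"
```

The numbered steps that follow outline one process for white box testing an application block.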
1. Create test plans. Identify all white box test scenarios and
prioritize them.
2. Profile the application block. This step involves studying the
code at run time to understand the resource utilization, time spent
by various methods and operations, areas in code that are not
accessed, and so on.
3. Test the internal subroutines. This step ensures that the
subroutines or the nonpublic interfaces can handle all types of data
appropriately.
4. Test loops and conditional statements. This step focuses on
testing the loops and conditional statements for accuracy and
efficiency for different data inputs.
5. Perform security testing. White box security testing helps you
understand possible security loopholes by looking at the way the
code handles security.
The next sections describe each of these steps.
[Two test case documents appeared here: one for testing code coverage of the Configuration Manager class of the CMAB (Scenario 1.3, Priority: High), and Table 6.4, "The CMAB Test Case Document for Testing the Code Coverage for Read Method and All Invoked Methods" (Scenario 1.4, Priority: High). Each document recorded execution details, tools required, and expected results.]
Memory allocation pattern. You can profile the memory usage of the application block by using profiling tools.
Time taken for executing a code path. For scenarios where performance is critical, you can profile the time they take. Timing a code path may require custom instrumentation of the appropriate code. There are also various tools available that help you measure the time it takes for a particular scenario to execute by profiling the application block.
Profiling for excessive resource utilization. The input from a
performance test may show excessive resource utilization, such as
CPU, memory, disk I/O, or network I/O, for a particular usage
scenario. But you may need to profile the code to track the piece of
code that is blocking resources disproportionately.
The code analysis reveals that the function may fail for a certain
input value. For example, a function expecting numeric input may
fail for an input value of 0.
In the case of the CMAB, the function reads information from the
cache. The function returns the information appropriately if the
cache is not empty. However, if during the process of reading, the
cache is flushed or refreshed, the function may fail.
The subroutine does not handle an exception where the remote call
to a database is not successful. For example, in the CMAB, if the
function is trying to update the SQL Server information but the SQL Server is unavailable.
Provide input that results in executing the loop zero times. This can be achieved where the lower bound value of the loop is greater than the upper bound value.
Provide input that results in executing the loop one time. This can be
achieved where the lower bound value and upper bound value are
the same.
Provide input such that the loop iterates n, n-1, and n+1 times. The
out-of-bound loops (n-1 and n+1) are very difficult to detect with a
simple code review; therefore, there is a need to execute special
test cases that can simulate such cases.
When testing nested loops, you can start by testing the innermost loop,
with all other loops set to iterate a minimum number of times. After the innermost loop has been tested, work outward, testing each enclosing loop in turn while the remaining outer loops iterate a minimum number of times.
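A hedged sketch of the loop-testing heuristic above, applied to an invented function that sums the first count elements of a list; the n+1 case deliberately drives the index out of bounds.

```python
# Invented function under test: sums the first `count` elements of a list.
def sum_first(values, count):
    total = 0
    for i in range(count):        # loop under test
        total += values[i]
    return total

data = [1, 2, 3, 4, 5]
n = len(data)

assert sum_first(data, 0) == 0            # loop executes zero times
assert sum_first(data, 1) == 1            # loop executes exactly once
assert sum_first(data, n - 1) == 10       # n-1 iterations
assert sum_first(data, n) == 15           # n iterations
try:
    sum_first(data, n + 1)                # n+1 iterations: exposes out-of-bound access
except IndexError:
    pass
```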
If the application block handles sensitive data and uses security mechanisms to protect it, white box security testing should examine how the code handles that data.
Advantages
White-box testing is one of the two biggest testing methodologies used
today. It primarily has three advantages:
1. Side effects of having knowledge of the source code are beneficial to thorough testing.
Disadvantages
Although White-box testing has great advantages, it is not perfect and
contains some disadvantages. It has two disadvantages:
1. White-box testing brings complexity to testing because, to be able to test every important aspect of the program, you must have great knowledge of the program. White-box testing therefore requires a tester with detailed knowledge of the code under test.
Black box testing, by contrast, examines the application from outside. Do not confuse white box testing with gray box testing: in gray box testing the tester does not have detailed knowledge of the internals. Gray box testing is also not a black box method, because the tester does know some part of the internal structure of the code. The gray box testing approach is therefore used when the tester has some knowledge of the internal structure, but not in detail.
The name comes from the fact that, to the tester, the application is like a gray box: partly transparent, so the tester can see inside it, but only partially. Because the tester does not require access to the code, gray box testing is also known as unbiased and non-intrusive testing.
Gray box testing is often used to test web services applications. For example, you might test the application with JavaScript disabled: if the JavaScript validation fails for any reason, the system receives an invalid email address to process, the assumptions made by the system fail, and as a result incorrect inputs are sent to the system.
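A minimal sketch of the scenario just described: with client-side JavaScript validation disabled or bypassed, the server-side check must still reject the invalid input. The validation function and regular expression are invented for illustration.

```python
import re

# Invented server-side check that does not trust the (possibly bypassed) client.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def register_user(email):
    if not EMAIL_RE.match(email):
        return "rejected: invalid email"
    return "accepted"

# Gray-box style test: submit what the disabled client-side check would have blocked.
assert register_user("not-an-email") == "rejected: invalid email"
assert register_user("user@example.com") == "accepted"
```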
Advantages
1. Offers combined benefits of black box and white box testing
wherever possible.
2. Grey box testers don't rely on the source code; instead they rely on
interface definition and functional specifications.
3. Based on the limited information available, a grey box tester can
design excellent test scenarios especially around communication
protocols and data type handling.
4. The test is done from the point of view of the user and not the
designer.
Disadvantages
1. Since the access to source code is not available, the ability to go
over the code and test coverage is limited.
2. The tests can be redundant if the software designer has already run
a test case.
3. Testing every possible input stream is unrealistic because it would
take an unreasonable amount of time; therefore, many program
paths will go untested.