
Jagannath Institute of Management Sciences

Lajpat Nagar

BCA
Sem V

SOFTWARE TESTING
UNIT 1

Testing is the process of evaluating a system or its component(s) with the intent to find whether it satisfies the specified requirements or not. This activity records the actual results, the expected results and the difference between them. In simple words, testing is executing a system in order to identify any gaps, errors or missing requirements contrary to the actual requirements. Testing is carried out at different levels, such as Unit, Integration and System testing.

According to ANSI/IEEE 1059 standard, Testing can be defined as “A


process of analyzing a software item to detect the differences between
existing and required conditions (that is defects/errors/bugs) and to
evaluate the features of the software item”.

Who does testing?

It depends on the process and the associated stakeholders of the project(s). In the IT industry, large companies have a team with responsibilities to evaluate the developed software in the context of the given requirements. Moreover, developers also conduct testing, which is called Unit Testing. In most cases, the following professionals are involved in testing a system within their respective capacities:

Software Tester

Software Developer

Project Lead/Manager

End User

Different companies have different designations for people who test the software on the basis of their experience and knowledge, such as Software Tester, Software Quality Assurance Engineer, QA Analyst, etc.

Testing cannot be carried out at just any point during the software's life cycle. The next two sections state when testing should be started and when to end it during the SDLC.

When to Start Testing

An early start to testing reduces the cost and time of rework and helps deliver error-free software to the client. In the Software Development Life Cycle (SDLC), testing can be started from the Requirements Gathering phase and continued till the deployment of the software. However, it also depends on the development model that is being used. For example, in the Waterfall model formal testing is conducted in the Testing phase, but in the incremental model testing is performed at the end of every increment/iteration and the whole application is tested at the end.

Testing is done in different forms at every phase of the SDLC: during the Requirements Gathering phase, the analysis and verification of requirements is also considered testing. Reviewing the design in the Design phase with the intent to improve the design is also considered testing. Testing performed by a developer on completion of the code is categorized as Unit Testing.

When to Stop Testing

Unlike when to start testing, it is difficult to determine when to stop testing, as testing is a never-ending process and no one can say that any software is 100% tested. The following aspects should be considered when deciding to stop testing:

Testing Deadlines.

Completion of test case execution.

Completion of Functional and code coverage to a certain point.

Bug rate falls below a certain level and no high priority bugs are
identified.

Management decision.

Difference between Testing and Debugging

Testing: It involves the identification of bugs/errors/defects in the software without correcting them. Normally professionals with a Quality Assurance background are involved in the identification of bugs. Testing is performed in the testing phase.

Debugging: It involves identifying, isolating and fixing the problems/bugs. Developers who code the software conduct debugging upon encountering an error in the code. Debugging is part of White Box or Unit Testing. Debugging can be performed in the development phase while conducting Unit Testing, or in later phases while fixing reported bugs.

Testing Myths

Given below are some of the more popular and common myths about
Software testing.

Myth: Testing is too expensive.

Reality: There is a saying: pay less for testing during software development, or pay more for maintenance or correction later. Early testing saves both time and cost in many aspects; however, cutting cost by skipping testing may result in the improper design of a software application, rendering the product useless.

Myth: Testing is time consuming.

Reality: During the SDLC phases, testing is never a time-consuming process in itself. However, diagnosing and fixing the errors identified during proper testing is a time-consuming but productive activity.

Myth: Testing cannot be started if the product is not fully developed.

Reality: No doubt, testing depends on the source code, but reviewing requirements and developing test cases is independent of the developed code. Moreover, an iterative or incremental approach as a development life cycle model may reduce the dependency of testing on fully developed software.

Myth: Complete Testing is Possible.

Reality: It becomes an issue when a client or tester thinks that complete testing is possible. It may be that all paths have been tested by the team, but complete testing is never achievable. There might be some scenarios that are never executed by the test team or the client during the software development life cycle and are executed only once the project has been deployed.

Myth: If the software is tested then it must be bug free.

Reality: This is a very common myth which clients, Project Managers


and the management team believe in. No one can say with absolute
certainty that a software application is 100% bug free even if a tester
with superb testing skills has tested the application.

Myth: Missed defects are due to Testers.

Reality: It is not a correct approach to blame testers for bugs that remain in the application even after testing has been performed. This myth relates to constraints on time, cost and changing requirements. However, the test strategy may also result in bugs being missed by the testing team.

Myth: Testers should be responsible for the quality of a product.

Reality: It is a very common misinterpretation that only testers or the testing team should be responsible for product quality. The tester's responsibility is to identify bugs and report them to the stakeholders; it is then their decision whether to fix the bugs or release the software. Releasing the software at that point puts more pressure on the testers, as they will be blamed for any error.

Myth: Test Automation should be used wherever it is possible, in order to reduce time.

Reality: Yes, it is true that Test Automation reduces the testing time, but it is not possible to start Test Automation at any point during software development. Test Automation should be started once the software has been manually tested and is stable to some extent. Moreover, Test Automation can never be used if requirements keep changing.

Myth: Anyone can test a software application.

Reality: People outside the IT industry think and even believe that anyone can test software and that testing is not a creative job. However, testers know very well that this is a myth. Thinking up alternative scenarios and trying to crash the software with the intent of exploring potential bugs is not possible for the person who developed it.

Myth: A tester’s task is only to find bugs.

Reality: Finding bugs in the Software is the task of testers but at the same time
they are domain experts of the particular software. Developers are only
responsible for the specific component or area that is assigned to them but
testers understand the overall workings of the software, what the dependencies
are and what the impacts of one module on another module are.

Software Testing has different goals and objectives. The major


objectives of Software testing are as follows:

Finding defects which may get created by the programmer while


developing the software. 

Gaining confidence in and providing information about the level of
quality. 

To prevent defects. 

To make sure that the end result meets the business and user
requirements. 
To ensure that it satisfies the BRS (Business Requirement Specification) and the SRS (System Requirement Specification).

To gain the confidence of the customers by providing them a quality 
product.
Software testing helps in finalizing the software application or product against business and user requirements. It is very important to have good test coverage in order to test the software application completely and make sure that it is performing well and as per the specifications.

While determining the coverage, the test cases should be designed well, with the maximum possibility of finding errors or bugs. The test cases should be very effective. This objective can be measured by the number of defects reported per test case: the higher the number of defects reported, the more effective the test cases are.

Once the delivery is made to the end users or the customers, they should be able to operate it without any complaints. In order to make this happen, the tester should know how the customers are going to use the product and accordingly write down the test scenarios and design the test cases. This will help a lot in fulfilling all the customer's requirements.

Software testing makes sure that the testing is being done properly and hence the system is ready for use. Good coverage means that the testing has covered various areas, like the functionality of the application, compatibility of the application with the OS, hardware and different types of browsers, performance testing to test the performance of the application, and load testing to make sure that the system is reliable and will not crash or have any blocking issues. It also determines that the application can be deployed easily to the machine without any resistance, and hence the application is easy to install, learn and use.

What is Defect or bugs or faults in software testing?


A defect is an error or a bug in the application that has been created. A programmer, while designing and building the software, can make mistakes or errors. These mistakes or errors mean that there are flaws in the software; these are called defects.

When the actual result deviates from the expected result while testing a software application or product, it results in a defect. Hence, any deviation from the specification mentioned in the product functional specification document is a defect. Different organizations call it differently: bug, issue, incident or problem.

When the result of the software application or product does not meet the end user's expectations or the software requirements, it results in a bug or defect. These defects or bugs occur because of an error in logic or in coding, which results in failure or unpredicted and unanticipated results.
Additional Information about Defects / Bugs:

While testing a software application or product, if a large number of defects are found, the application is called buggy.

When a tester finds a bug or defect, it is required to convey the same to the developers. Testers therefore report bugs with detailed steps; such reports are called Bug Reports, issue reports, problem reports, etc.

This Defect report or Bug report consists of the following information:

Defect ID – Every bug or defect has its unique identification number.

Defect Description – This includes the abstract of the issue.

Product Version – This includes the product version of the application in which the defect is found.

Detail Steps – This includes the detailed steps of the issue, with screenshots attached, so that developers can recreate it.

Date Raised – This includes the date when the bug was reported.

Reported By – This includes the details of the tester who reported the bug, like name and ID.

Status – This field includes the status of the defect, like New, Assigned, Open, Retest, Verification, Closed, Failed, Deferred, etc.

Fixed By – This field includes the details of the developer who fixed it, like name and ID.

Date Closed – This includes the date when the bug was closed.

Severity – Based on the severity (Critical, Major or Minor), it tells us about the impact of the defect or bug on the software application.

Priority – Based on the priority set (High/Medium/Low), the order of fixing the defects can be decided.
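The fields above can be captured in any bug-tracking tool. As a rough illustration only (the record layout and values below are hypothetical and not tied to any particular tool), a defect report might be modelled like this in Python:

from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class DefectReport:
    # Illustrative defect record; field names mirror the list above.
    defect_id: str                     # unique identification number
    description: str                   # abstract of the issue
    product_version: str               # version in which the defect was found
    detail_steps: str                  # detailed steps to reproduce
    date_raised: date
    reported_by: str                   # tester name and ID
    status: str = "New"                # New, Assigned, Open, Retest, Closed, Deferred, ...
    fixed_by: str = ""                 # developer name and ID
    date_closed: Optional[date] = None
    severity: str = "Minor"            # Critical, Major or Minor
    priority: str = "Low"              # High, Medium or Low

bug = DefectReport(
    defect_id="DEF-101",
    description="Login button unresponsive on second click",
    product_version="2.3.0",
    detail_steps="1. Open login page. 2. Click Login twice.",
    date_raised=date(2024, 1, 15),
    reported_by="Tester A (T-07)",
    severity="Major",
    priority="High",
)
print(bug.status)   # New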
Why is software testing necessary?

Software Testing is necessary because we all make mistakes. Some of


those mistakes are unimportant, but some of them are expensive or
dangerous. We need to check everything and anything we produce
because things can always go wrong – humans make mistakes all the time.

Since we assume that our work may have mistakes, we all need to check our own work. However, some mistakes come from bad assumptions and blind spots, so we might make the same mistakes when we check our own work as we made when we did it, and so may not notice the flaws in what we have done.

Ideally, we should get someone else to check our work because


another person is more likely to spot the flaws.

There are several reasons which clearly tell us why software testing is important and what the major things are that we should consider while testing any product or application.

Software testing is very important because of the following reasons:

 Software testing is really required to point out the defects and errors that were made during the development phases.
 It is essential since it ensures the customer's reliability and their satisfaction in the application.
 It is very important to ensure the quality of the product. A quality product delivered to the customers helps in gaining their confidence.
 Testing is necessary in order to provide facilities to the customers, like the delivery of a high quality product or software application which requires lower maintenance cost and hence gives more accurate, consistent and reliable results.
 Testing is required for an effective performance of the software application or product.
 It is important to ensure that the application does not result in any failures, because these can be very expensive in the future or in later stages of development.
 It is required to stay in the business.

Principles of Testing

There are seven principles of testing. They are as follows:

1) Testing shows presence of defects: Testing can show the defects are
present, but cannot prove that there are no defects. Even after testing the
application or product thoroughly we cannot say that the product is 100% defect
free. Testing always reduces the number of undiscovered defects remaining in
the software but even if no defects are found, it is not a proof of correctness.
2) Exhaustive testing is impossible: Testing everything, including all combinations of inputs and preconditions, is not possible. So, instead of doing exhaustive testing, we can use risks and priorities to focus testing efforts. For example: if one screen of an application has 15 input fields, each having 5 possible values, then to test all the valid combinations you would need 30,517,578,125 (5^15) tests. It is very unlikely that the project timescales would allow for this number of tests. So, assessing and managing risk is one of the most important activities and a key reason for testing in any project.
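The arithmetic behind that figure is simply the number of combinations of field values:

# 15 input fields, each with 5 possible values:
# every combination would require 5**15 test cases.
combinations = 5 ** 15
print(combinations)   # 30517578125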

3) Early testing: In the software development life cycle testing activities should
start as early as possible and should be focused on defined objectives.

4) Defect clustering: A small number of modules contains most of the defects


discovered during pre-release testing or shows the most operational failures.

5) Pesticide paradox: If the same kinds of tests are repeated again and again, eventually the same set of test cases will no longer be able to find any new bugs. To overcome this "pesticide paradox", it is very important to review the test cases regularly, and new and different tests need to be written to exercise different parts of the software or system to potentially find more defects.

6) Testing is context dependent: Testing is basically context dependent. Different kinds of software are tested differently. For example, safety-critical software is tested differently from an e-commerce site.

7) Absence-of-errors fallacy: If the system built is unusable and does not fulfil the user's needs and expectations, then finding and fixing defects does not help.

Software Testing Life Cycle STLC


Software Testing Life Cycle (STLC) is the testing process which is executed in a systematic and planned manner. In the STLC process, different activities are carried out to improve the quality of the product. Let us quickly see what stages are involved in a typical Software Testing Life Cycle (STLC).

The following steps are involved in the Software Testing Life Cycle (STLC). Each step has its own entry criteria and deliverables.

 Requirement Analysis
 Test Planning
 Test Case Development
 Environment Setup
 Test Execution
 Test Cycle Closure
Ideally, the next step is based on the previous step, or we can say that the next step cannot be started until the previous step is completed. This holds in the ideal situation, but practically it is not always true.

So let us discuss in detail what activities and deliverables are involved in each step.

Requirement Analysis:

Requirement Analysis is the very first step in the Software Testing Life Cycle (STLC). In this step the Quality Assurance (QA) team understands the requirements in terms of what will be tested and figures out the testable requirements. If any requirement is conflicting, missing or not understood, the QA team follows up with the various stakeholders, like the Business Analyst, System Architect, Client, Technical Manager/Lead etc., to gain detailed knowledge of the requirements.

Because QA is involved from the very first step of the STLC, this helps prevent the introduction of defects into the software under test. The requirements can be either functional or non-functional, like performance or security. The automation feasibility of the project can also be assessed in this stage (if applicable).
Entry Criteria:
– The Requirements Specification document should be available.
– The application architecture document should be available.
– Along with the above documents, the acceptance criteria should be well defined.

Activities:
– Prepare the list of questions or queries and get them resolved from the Business Analyst, System Architect, Client, Technical Manager/Lead etc.
– Make out the list of all types of tests to be performed, like Functional, Security and Performance etc.
– Define the testing focus and priorities.
– List down the test environment details where the testing activities will be carried out.
– Check the automation feasibility if required and prepare the automation feasibility report.

Deliverables:
– List of questions with all answers resolved from business, i.e. testable requirements.
– Automation feasibility report (if applicable).

Test Planning:

Test Planning is the most important phase of the Software Testing Life Cycle, where the whole testing strategy is defined. This phase is also called the Test Strategy phase. In this phase, typically the Test Manager (or Test Lead, depending on the company) is involved in determining the effort and cost estimates for the entire project. This phase is kicked off once the requirement gathering phase is completed, and based on the requirement analysis the Test Plan is prepared. The result of the Test Planning phase is the Test Plan or Test Strategy document and the testing effort estimation document. Once the Test Planning phase is completed, the QA team can start with the test case development activity.

Entry Criteria:
– Requirements documents (updated version for unclear or missing requirements).
– Automation feasibility report.

Activities:
– Define the objective and scope of the project.
– List down the testing types involved in the STLC.
– Test effort estimation and resource planning.
– Selection of a testing tool, if required.
– Define the testing process overview.
– Define the test environment required for the entire project.
– Prepare the test schedules.
– Define the control procedures.
– Determine roles and responsibilities.
– List down the testing deliverables.
– Define the entry criteria, suspension criteria, resumption criteria and exit criteria.
– Define the risks involved, if any.

Deliverables:
– Test Plan or Test Strategy document.
– Testing effort estimation document.

Test Case Development:

The test case development activity is started once the test planning activity is finished. This is the phase of the STLC where the testing team writes down the detailed test cases. Along with the test cases, the testing team also prepares the test data, if required for testing. Once the test cases are ready, they are reviewed by peer members or the QA lead.

Also, the Requirement Traceability Matrix (RTM) is prepared. The Requirement Traceability Matrix is an industry-accepted format for tracking requirements, where each test case is mapped to a requirement. Using this RTM we can track backward and forward traceability.
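As a rough sketch of the idea (requirement and test case IDs below are made up), an RTM is essentially a two-way mapping between requirements and test cases:

# Hypothetical Requirement Traceability Matrix: requirement ID -> test case IDs.
rtm = {
    "REQ-01": ["TC-001", "TC-002"],
    "REQ-02": ["TC-003"],
    "REQ-03": [],          # no test case yet -> a coverage gap
}

# Forward traceability: which test cases cover a requirement?
print(rtm["REQ-01"])                                       # ['TC-001', 'TC-002']

# Backward traceability: which requirement does a test case trace back to?
def requirements_for(test_case_id):
    return [req for req, cases in rtm.items() if test_case_id in cases]

print(requirements_for("TC-003"))                          # ['REQ-02']

# Requirements that are still untested:
print([req for req, cases in rtm.items() if not cases])    # ['REQ-03']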
Entry Criteria:
– Requirements documents (updated version for unclear or missing requirements).
– Automation feasibility report.

Activities:
– Preparation of test cases.
– Preparation of test automation scripts (if required).
– Pre-requisite test data preparation for executing test cases.

Deliverables:
– Test cases.
– Test data.
– Test automation scripts (if required).

Test Environment Setup:

Setting up the test environment is a vital part of the STLC. Basically, the test environment decides the conditions under which the software is tested. This is an independent activity and can be started in parallel with test case development. The test team is usually not involved in setting up the testing environment; depending on the company, a developer or the customer creates the testing environment. Meanwhile, the testing team should prepare the smoke test cases to check the readiness of the test environment setup.

Entry Criteria:
– Test Plan is available.
– Smoke test cases are available.
– Test data is available.

Activities:
– Analyze the requirements and prepare the list of software and hardware required to set up the test environment.
– Set up the test environment.
– Once the test environment is set up, execute the smoke test cases to check its readiness.

Deliverables:
– Test environment ready with test data.
– Results of the smoke test cases.
Test Execution:

Once test case development and test environment setup are completed, the test execution phase can be kicked off. In this phase the testing team starts executing test cases based on the prepared test plan and the test cases prepared in the prior step.

Once a test case passes, it can be marked as Passed. If a test case fails, the corresponding defect is reported to the developer team via the bug tracking system, and the bug is linked to the corresponding test case for further analysis. Ideally, every failed test case should be associated with at least one bug. Using this linking we can get the failed test cases together with the bugs associated with them. Once a bug is fixed by the development team, the same test case can be executed again based on your test planning.

If any test cases are blocked due to a defect, they can be marked as Blocked, so we can get a report on how many test cases passed, failed, were blocked or were not run. Once the defects are fixed, the same Failed or Blocked test cases can be executed again to retest the functionality.

Entry Criteria:
– Test Plan or Test Strategy document.
– Test cases.
– Test data.

Activities:
– Execute the test cases based on the test planning.
– Mark the status of test cases as Passed, Failed, Blocked, Not Run etc.
– Assign a bug ID to all Failed and Blocked test cases.
– Do retesting once the defects are fixed.
– Track the defects to closure.

Deliverables:
– Test case execution report.
– Defect report.

Test Cycle Closure:

Call a testing team meeting and evaluate the cycle completion criteria based on test coverage, quality, cost, time, critical business objectives and software. Discuss what went well and which areas need to be improved, and take the lessons from the current STLC as input to upcoming test cycles, which will help remove bottlenecks in the STLC process. Analyze the test cases and bug reports to find the defect distribution by type and severity. Once the test cycle is complete, the test closure report and test metrics are prepared.

Entry Criteria:
– Test case execution is completed.
– Test case execution report.
– Defect report.

Activities:
– Evaluate the cycle completion criteria based on test coverage, quality, cost, time, critical business objectives and software.
– Prepare test metrics based on the above parameters.
– Prepare the test closure report.
– Share best practices for any similar projects in the future.

Deliverables:
– Test closure report.
– Test metrics.

UNIT 2

TESTING TECHNIQUES AND STRATEGY

Manual Testing
This type includes testing the software manually, i.e. without using any automated tool or script. In this type the tester takes over the role of an end user and tests the software to identify any unexpected behavior or bug. There are different stages of manual testing, like unit testing, integration testing, system testing and user acceptance testing.

Testers use test plans, test cases or test scenarios to test the software and to ensure the completeness of testing. Manual testing also includes exploratory testing, as testers explore the software to identify errors in it.

Automation Testing

[Figure: the test automation cycle – test script, test execution, test automation.]
Automation testing, which is also known as "Test Automation", is when the tester writes scripts and uses other software to test the software. This process involves the automation of a manual process. Automation testing is used to re-run, quickly and repeatedly, the test scenarios that were performed manually.

Apart from regression testing, automation testing is also used to test the application from a load, performance and stress point of view. It increases test coverage, improves accuracy, and saves time and money in comparison to manual testing.

What to automate: It is not possible to automate everything in the software; however, the areas where users can make transactions, such as login forms or registration forms, and any area where a large number of users can access the software simultaneously, should be automated.

Furthermore, all GUI items, connections with databases, field validations etc. can be efficiently tested by automating the manual process.

When to Automate: Test automation should be used after considering the following aspects of the software:

 Large and critical projects.
 Projects that require testing the same areas frequently.
 Requirements that do not change frequently.
 Accessing the application for load and performance with many virtual users.
 Stable software with respect to manual testing.
 Availability of time.

How to Automate: Automation is done by using a supportive computer language, like VB scripting, and an automated software application. There are a lot of tools available which can be used to write automation scripts. Before mentioning the tools, let us identify the process which can be used to automate the testing:

 Identifying areas within the software for automation.
 Selection of the appropriate tool for test automation.
 Writing test scripts.
 Development of test suites.
 Execution of scripts.
 Creation of result reports.
 Identification of any potential bugs or performance issues.
The following tools can be used for automation testing:

 HP Quick Test Professional
 Selenium
 IBM Rational Functional Tester
 SilkTest
 TestComplete
 Testing Anywhere
 WinRunner
 LoadRunner
 Visual Studio Test Professional
 WATIR
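To give a feel for the "writing test scripts" step, here is a minimal sketch using Selenium's Python bindings (one of the tools listed above). The URL, field names and expected page title are hypothetical, and the exact API calls may vary between Selenium versions:

from selenium import webdriver
from selenium.webdriver.common.by import By

# Hypothetical login-form check; replace the URL and locators with real ones.
driver = webdriver.Chrome()
try:
    driver.get("https://example.com/login")
    driver.find_element(By.NAME, "username").send_keys("test_user")
    driver.find_element(By.NAME, "password").send_keys("secret")
    driver.find_element(By.ID, "login-button").click()

    # Expected result: the welcome page appears after a valid login.
    assert "Welcome" in driver.title, "Login did not reach the welcome page"
finally:
    driver.quit()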

Testing Methods

There are different methods which can be used for Software testing. This
chapter briefly describes those methods.

Black Box Testing

The technique of testing without having any knowledge of the interior workings of the application is Black Box testing. The tester is oblivious to the system architecture and does not have access to the source code. Typically, when performing a black box test, a tester will interact with the system's user interface by providing inputs and examining outputs, without knowing how and where the inputs are worked upon.

Advantages:

 Well suited and efficient for large code segments.
 Code access is not required.
 Clearly separates the user's perspective from the developer's perspective through visibly defined roles.
 Large numbers of moderately skilled testers can test the application with no knowledge of implementation, programming language or operating systems.

Disadvantages:

 Limited coverage, since only a selected number of test scenarios is actually performed.
 Inefficient testing, due to the fact that the tester only has limited knowledge about the application.
 Blind coverage, since the tester cannot target specific code segments or error-prone areas.
 The test cases are difficult to design.

White Box Testing

White box testing is the detailed investigation of the internal logic and structure of the code. White box testing is also called glass box testing or open box testing. In order to perform white box testing on an application, the tester needs to possess knowledge of the internal working of the code. The tester needs to look inside the source code and find out which unit/chunk of the code is behaving inappropriately.

Advantages:

 As the tester has knowledge of the source code, it becomes very easy to find out which type of data can help in testing the application effectively.
 It helps in optimizing the code.
 Extra lines of code which could bring in hidden defects can be removed.
 Due to the tester's knowledge of the code, maximum coverage is attained during test scenario writing.

Disadvantages:

 Due to the fact that a skilled tester is needed to perform white box testing, the costs are increased.
 Sometimes it is impossible to look into every nook and corner to find hidden errors that may create problems, as many paths will go untested.
 It is difficult to maintain white box testing, as the use of specialized tools like code analyzers and debugging tools is required.

Grey Box Testing

Grey Box testing is a technique to test the application with limited knowledge of the internal workings of an application. In software testing, the phrase "the more you know, the better" carries a lot of weight when testing an application.

Mastering the domain of a system always gives the tester an edge over someone with limited domain knowledge. Unlike black box testing, where the tester only tests the application's user interface, in grey box testing the tester has access to design documents and the database. Having this knowledge, the tester is able to better prepare test data and test scenarios when making the test plan.

Advantages:

 Offers the combined benefits of black box and white box testing wherever possible.
 Grey box testers don't rely on the source code; instead they rely on interface definitions and functional specifications.
 Based on the limited information available, a grey box tester can design excellent test scenarios, especially around communication protocols and data type handling.
 The test is done from the point of view of the user and not the designer.

Disadvantages:

 Since access to the source code is not available, the ability to go over the code and test coverage is limited.
 The tests can be redundant if the software designer has already run a test case.
 Testing every possible input stream is unrealistic because it would take an unreasonable amount of time; therefore, many program paths will go untested.
[Figure: visual difference between the three testing methods – internals unknown (black box), internals partially known (grey box), internals fully known (white box).]

Comparison between the Three Testing Types

1. Black Box Testing: The internal workings of the application do not need to be known.
   Grey Box Testing: The tester has some knowledge of the internal workings.
   White Box Testing: The tester has full knowledge of the internal workings of the application.

2. Black Box Testing: Also known as closed box testing, data driven testing or functional testing.
   Grey Box Testing: Another term for grey box testing is translucent testing, as the tester has limited knowledge of the insides of the application.
   White Box Testing: Also known as clear box testing, structural testing or code based testing.

3. Black Box Testing: Performed by end users and also by testers and developers.
   Grey Box Testing: Performed by end users and also by testers and developers.
   White Box Testing: Normally done by testers and developers.

4. Black Box Testing: Testing is based on external expectations; the internal behavior of the application is unknown.
   Grey Box Testing: Testing is done on the basis of high level database diagrams and data flow diagrams.
   White Box Testing: The internal workings are fully known and the tester can design test data accordingly.

5. Black Box Testing: The least time consuming and exhaustive type of testing.
   Grey Box Testing: Partly time consuming and exhaustive.
   White Box Testing: The most exhaustive and time consuming type of testing.

6. Black Box Testing: Not suited to algorithm testing.
   Grey Box Testing: Not suited to algorithm testing.
   White Box Testing: Suited for algorithm testing.

7. Black Box Testing: This can only be done by the trial and error method.
   Grey Box Testing: Data domains and internal boundaries can be tested, if known.
   White Box Testing: Data domains and internal boundaries can be better tested.
Levels of Testing

There are different levels during the process of Testing. In this chapter a brief
description is provided about these levels.

Levels of testing include the different methodologies that can be used while
conducting Software Testing. Following are the main levels of Software
Testing:

 Functional Testing. 

 Non- functional Testing. 

Functional Testing

This is a type of black box testing that is based on the specifications of the
software that is to be tested. The application is tested by providing input and
then the results are examined that need to conform to the functionality it was
intended for. Functional Testing of the software is conducted on a complete,
integrated system to evaluate the system's compliance with its specified
requirements. There are five steps that are involved when testing an application
for functionality.


Step I – The determination of the functionality that the intended application is meant to perform.

Step II – The creation of test data based on the specifications of the application.

Step III – The determination of the expected output based on the test data and the specifications of the application.

Step IV – The writing of test scenarios and the execution of test cases.

Step V – The comparison of actual and expected results based on the executed test cases.
An effective testing practice will see the above steps applied to the testing
policies of every organization and hence it will make sure that the organization
maintains the strictest of standards when it comes to software quality.

Unit Testing
This type of testing is performed by the developers before the setup is handed over to the testing team to formally execute the test cases. Unit testing is performed by the respective developers on the individual units of source code in their assigned areas. The developers use test data that is separate from the test data of the quality assurance team.

The goal of unit testing is to isolate each part of the program and show that
individual parts are correct in terms of requirements and functionality.
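For illustration, a unit test exercises one small piece of code in isolation and checks it against its requirement. A minimal sketch with Python's built-in unittest module (the function under test is made up for this example):

import unittest

def calculate_discount(amount):
    """Hypothetical unit under test: 10% discount on orders of 1000 or more."""
    if amount < 0:
        raise ValueError("amount cannot be negative")
    return amount * 0.9 if amount >= 1000 else amount

class TestCalculateDiscount(unittest.TestCase):
    def test_no_discount_below_threshold(self):
        self.assertEqual(calculate_discount(999), 999)

    def test_discount_at_threshold(self):
        self.assertEqual(calculate_discount(1000), 900)

    def test_negative_amount_rejected(self):
        with self.assertRaises(ValueError):
            calculate_discount(-1)

if __name__ == "__main__":
    unittest.main()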

Limitations of Unit Testing

Testing cannot catch each and every bug in an application. It is impossible to evaluate every execution path in every software application. The same is the case with unit testing. There is a limit to the number of scenarios and the amount of test data that the developer can use to verify the source code. After having exhausted all options, there is no choice but to stop unit testing and merge the code segment with other units.

Integration Testing

The testing of combined parts of an application to determine if they function correctly together is Integration testing. There are two methods of doing Integration testing: Bottom-up Integration testing and Top-down Integration testing.

 Bottom-up integration testing begins with unit testing, followed by tests of progressively higher-level combinations of units, called modules or builds.
 In Top-down integration testing, the highest-level modules are tested first and progressively lower-level modules are tested after that. In a comprehensive software development environment, bottom-up testing is usually done first, followed by top-down testing.

System Testing

This is the next level in the testing and tests the system as a whole. Once all the
components are integrated, the application as a whole is tested rigorously to see
that it meets Quality Standards. This type of testing is performed by a
specialized testing team.

Why is System Testing so Important?

 System Testing is the first step in the Software Development Life Cycle where the application is tested as a whole.
 The application is tested thoroughly to verify that it meets the functional and technical specifications.
 The application is tested in an environment which is very close to the production environment where the application will be deployed.
 System Testing enables us to test, verify and validate both the business requirements as well as the application's architecture.

Regression Testing
Whenever a change is made in a software application, it is quite possible that other areas within the application have been affected by this change. Verifying that a fixed bug hasn't resulted in another functionality or business rule violation is Regression testing. The intent of Regression testing is to ensure that a change, such as a bug fix, did not result in another fault being uncovered in the application.

Why is Regression Testing so Important?

 It minimizes the gaps in testing when an application with changes has to be tested.
 It tests the new changes to verify that the changes made did not affect any other area of the application.
 It mitigates risks when regression testing is performed on the application.
 Test coverage is increased without compromising timelines.
 It increases the speed to market of the product.

Acceptance Testing

This is arguably the most important type of testing, as it is conducted by the Quality Assurance team, who will gauge whether the application meets the intended specifications and satisfies the client's requirements. The QA team will have a set of pre-written scenarios and test cases that will be used to test the application.

More ideas will be shared about the application and more tests can be performed on it to gauge its accuracy and the reasons why the project was initiated. Acceptance tests are not only intended to point out simple spelling mistakes, cosmetic errors or interface gaps, but also to point out any bugs in the application that will result in system crashes or major errors in the application.

By performing acceptance tests on an application the testing team will deduce


how the application will perform in production. There are also legal and
contractual requirements for acceptance of the system.

Alpha Testing

This test is the first stage of testing and will be performed amongst the teams
(developer and QA teams). Unit testing, integration testing and system testing
when combined are known as alpha testing. During this phase, the following
will be tested in the application:

 Spelling mistakes
 Broken links
 Cloudy directions
 The application will be tested on machines with the lowest specification to test loading times and any latency problems.

Beta Testing

This test is performed after Alpha testing has been successfully performed. In
beta testing a sample of the intended audience tests the application. Beta testing
is also known as pre-release testing. Beta test versions of software are ideally
distributed to a wide audience on the Web, partly to give the program a "real-
world" test and partly to provide a preview of the next release. In this phase the
audience will be testing the following:
 Users will install and run the application and send their feedback to the project team.
 Typographical errors, confusing application flow, and even crashes are reported.
 Getting this feedback, the project team can fix the problems before releasing the software to the actual users.
 The more issues you fix that solve real user problems, the higher the quality of your application will be.
 Having a higher-quality application when you release it to the general public will increase customer satisfaction.

Non-Functional Testing

This section is based upon testing the application on its non-functional attributes. Non-functional testing involves testing the software against requirements which are non-functional in nature but important as well, such as performance, security, user interface etc. Some of the important and commonly used non-functional testing types are discussed below.

Performance Testing

It is mostly used to identify any bottlenecks or performance issues rather than finding bugs in the software. There are different causes which contribute to lowering the performance of software:

 Network delay.
 Client side processing.
 Database transaction processing.
 Load balancing between servers.
 Data rendering.

Performance testing is considered one of the important and mandatory testing types in terms of the following aspects:

 Speed (i.e. response time, data rendering and accessing)
 Capacity
 Stability
 Scalability

It can be either a qualitative or quantitative testing activity and can be divided into different sub-types such as Load testing and Stress testing.

Load Testing

Load testing is a process of testing the behavior of the software by applying maximum load in terms of accessing the software and manipulating large input data. It can be done at both normal and peak load conditions. This type of testing identifies the maximum capacity of the software and its behavior at peak time.

Most of the time, Load testing is performed with the help of automated tools
such as Load Runner, AppLoader, IBM Rational Performance Tester, Apache
JMeter, Silk Performer, Visual Studio Load Test etc.

Virtual users (VUsers) are defined in the automated testing tool and the script is
executed to verify the Load testing for the Software. The quantity of users can
be increased or decreased concurrently or incrementally based upon the
requirements.
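As a very rough sketch of the virtual-user idea (not tied to any of the tools above; the URL is hypothetical), concurrent users can be simulated with threads and the response times collected:

import threading
import time
import urllib.request

URL = "https://example.com/"   # hypothetical system under test
VUSERS = 10                    # number of simulated concurrent users
response_times = []

def virtual_user():
    start = time.time()
    try:
        urllib.request.urlopen(URL, timeout=10).read()
        response_times.append(time.time() - start)
    except Exception as exc:
        print("request failed:", exc)

threads = [threading.Thread(target=virtual_user) for _ in range(VUSERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

if response_times:
    print("average response time:", sum(response_times) / len(response_times))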

Stress Testing

This testing type includes the testing of Software behavior under abnormal
conditions. Taking away the resources, applying load beyond the actual load
limit is Stress testing.

The main intent is to test the Software by applying the load to the system and
taking over the resources used by the Software to identify the breaking point.
This testing can be performed by testing different scenarios such as:

 Shutdown or restart of network ports randomly.
 Turning the database on or off.
 Running different processes that consume resources such as CPU, memory, server etc.

Usability Testing

This section includes different concepts and definitions of Usability testing from the software point of view. It is a black box technique and is used to identify any errors and improvements in the software by observing the users through their usage and operation.

According to Nielsen, usability can be defined in terms of five factors: efficiency of use, learnability, memorability, errors/safety and satisfaction. According to him, the usability of the product will be good and the system will be usable if it possesses the above factors.

Nigel Bevan and Macleod considered that usability is the quality requirement which can be measured as the outcome of interactions with a computer system. This requirement can be fulfilled and the end user will be satisfied if the intended goals are achieved effectively with the use of proper resources.

Molich in 2000 stated that a user friendly system should fulfil the following five goals: Easy to Learn, Easy to Remember, Efficient to Use, Satisfactory to Use and Easy to Understand.

In addition to the different definitions of usability, there are some standards, quality models and methods which define usability in the form of attributes and sub-attributes, such as ISO-9126, ISO-9241-11, ISO-13407 and IEEE Std. 610.12 etc.
Difference between UI and Usability Testing
UI testing involves testing the Graphical User Interface of the software. This testing ensures that the GUI conforms to the requirements in terms of color, alignment, size and other properties.

On the other hand Usability testing ensures that a good and user friendly GUI is
designed and is easy to use for the end user. UI testing can be considered as a
sub part of Usability testing.

Security Testing

Security testing involves testing the software in order to identify any flaws and gaps from a security and vulnerability point of view. The following are the main aspects which security testing should ensure:

 Confidentiality.
 Integrity.
 Authentication.
 Availability.
 Authorization.
 Non-repudiation.
 The software is secure against known and unknown vulnerabilities.
 The software data is secure.
 The software complies with all security regulations.
 Input checking and validation.
 SQL injection attacks.
 Injection flaws.
 Session management issues.
 Cross-site scripting attacks.
 Buffer overflow vulnerabilities.
 Directory traversal attacks.

Portability Testing

Portability testing includes testing the software with the intent that it should be reusable and can be moved to another environment as well. The following are strategies that can be used for portability testing:

 Transferring installed software from one computer to another.
 Building an executable (.exe) to run the software on different platforms.

Portability testing can be considered one of the sub-parts of system testing, as this testing type includes the overall testing of the software with respect to its usage over different environments. Computer hardware, operating systems and browsers are the major focus of portability testing. The following are some pre-conditions for portability testing:

 The software should be designed and coded keeping in mind the portability requirements.
 Unit testing has been performed on the associated components.
 Integration testing has been performed.
 The test environment has been established.

Basis Path Testing: Basis path testing is a white box testing technique first proposed by Tom McCabe. The basis path method enables the test designer to derive a logical complexity measure of a procedural design and use this measure as a guide for defining a basis set of execution paths. Test cases derived to exercise the basis set are guaranteed to execute every statement in the program at least one time during testing.

Flow Graph Notation: The flow graph depicts logical control flow using a diagrammatic
notation. Each structured construct has a corresponding flow graph symbol.

On a flow graph:

 Arrows called edges represent the flow of control.
 Circles called nodes represent one or more actions.
 Areas bounded by edges and nodes are called regions.
 A predicate node is a node containing a condition.

Any procedural design can be translated into a flow graph. Note that compound boolean expressions in tests generate at least two predicate nodes and additional arcs.

Example: (flow graph figure not reproduced in this text.)
Cyclomatic Complexity: Cyclomatic complexity is a software metric that provides a quantitative measure of the logical complexity of a program. When used in the context of the basis path testing method, the value computed for cyclomatic complexity defines the number of independent paths in the basis set of a program, and provides an upper bound for the number of tests that must be conducted to ensure that all statements have been executed at least once.

An independent path is any path through the program that introduces at least one new set of processing statements or a new condition.

Cyclomatic complexity has a foundation in graph theory and provides us with an extremely useful software metric. Complexity is computed in one of three ways:

1. The number of regions of the flow graph corresponds to the cyclomatic complexity.

2. Cyclomatic complexity, V(G), for a flow graph G is defined as V(G) = E - N + 2P, where E is the number of flow graph edges, N is the number of flow graph nodes, and P is the number of connected components (for a single connected flow graph, P = 1).

3. Cyclomatic complexity, V(G), for a flow graph G is also defined as V(G) = π + 1, where π is the number of predicate nodes contained in the flow graph G.

The cyclomatic complexity gives a quantitative measure of the logical complexity.

This value gives the number of independent paths in the basis set, and an upper bound for the
number of tests to ensure that each statement and both sides of every condition is executed at
least once.

An independent path is any path through a program that introduces at least one new set of
processing statements (i.e., a new node) or a new condition (i.e., a new edge)
1: WHILE NOT EOF LOOP
2:   Read Record;
2:   IF field1 equals 0 THEN
3:     Add field1 to Total
3:     Increment Counter
4:   ELSE
4:     IF field2 equals 0 THEN
5:       Print Total, Counter
5:       Reset Counter
6:     ELSE
6:       Subtract field2 from Total
7:     END IF
8:   END IF
8:   Print "End Record"
9: END LOOP
9: Print Counter

Example has:

 Independent Paths: 
1. 1, 9
2. 1, 2, 3, 8, 1, 9
3. 1, 2, 4, 5, 7, 8, 1, 9
4. 1, 2, 4, 6, 7, 8, 1, 9
 Cyclomatic Complexity of 4; computed using any of these 3 formulas: 
1. #Edges - #Nodes + #terminal vertices (usually 2)
2. #Predicate Nodes + 1
3. Number of regions of flow graph.

Cyclomatic complexity provides upper bound for number of tests required to guarantee
coverage of all program statements.

Could we omit path #1 since it's covered in #2?
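To make the formulas concrete, here is a small, purely illustrative Python sketch that encodes the flow graph of the listing above (node numbers match the statement labels; the edge list is assumed from the control flow shown) and computes V(G) two ways:

# Assumed edge list of the example flow graph (predicate nodes: 1, 2 and 4).
edges = [(1, 2), (1, 9), (2, 3), (2, 4), (3, 8),
         (4, 5), (4, 6), (5, 7), (6, 7), (7, 8), (8, 1)]
nodes = {n for edge in edges for n in edge}

E, N, P = len(edges), len(nodes), 1        # P = number of connected components
predicates = [n for n in nodes if sum(1 for a, _ in edges if a == n) > 1]

print("V(G) = E - N + 2P          =", E - N + 2 * P)         # 11 - 9 + 2 = 4
print("V(G) = predicate nodes + 1 =", len(predicates) + 1)   # 3 + 1 = 4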

Deriving Test Cases

1. Using the design or code, draw the corresponding flow graph.


2. Determine the cyclomatic complexity of the flow graph.
3. Determine a basis set of independent paths.
4. Prepare test cases that will force execution of each path in the basis set.

Note: some paths may only be able to be executed as part of another test.

Graph Matrices: The procedure for deriving the flow graph and even determining a set of basis paths is amenable to mechanization. To develop a software tool that assists in basis path testing, a data structure called a graph matrix can be quite useful.

A graph matrix is a square matrix whose size is equal to the number of nodes in the flow graph. Each row and column corresponds to an identified node, and matrix entries correspond to connections between nodes. The connection matrix can also be used to find the cyclomatic complexity.
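As a rough sketch of how a tool might use this (same example graph as before; the summation of (connections - 1) over rows is the usual connection-matrix shortcut for cyclomatic complexity):

# Connection matrix for the example flow graph: matrix[i][j] = 1 if there is an
# edge from node i+1 to node j+1 (nodes 1..9 map to indices 0..8).
edges = [(1, 2), (1, 9), (2, 3), (2, 4), (3, 8),
         (4, 5), (4, 6), (5, 7), (6, 7), (7, 8), (8, 1)]
size = 9
matrix = [[0] * size for _ in range(size)]
for a, b in edges:
    matrix[a - 1][b - 1] = 1

# Sum (connections - 1) over every row that has more than one connection,
# then add 1 to obtain the cyclomatic complexity.
complexity = sum(sum(row) - 1 for row in matrix if sum(row) > 1) + 1
print("Cyclomatic complexity from the graph matrix:", complexity)   # 4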

Control Structure Testing: Described below are some of the variations of Control Structure Testing:

Condition Testing: Condition testing is a test case design method that exercises the logical conditions
contained in a program module.

Data Flow Testing: The data flow testing method selects test paths of a program according to the
locations of definitions and uses of variables in the program.

Loop Testing: Loop Testing is a white box testing technique that focuses exclusively on the validity of
loop constructs. Four classes of loops can be defined: Simple loops, Concatenated loops, nested loops,
and unstructured loops.
Simple Loops: The following sets of tests can be applied to simple loops, where 'n' is the maximum number of allowable passes through the loop.

1. Skip the loop entirely.

2. Only one pass through the loop.

3. Two passes through the loop.

4. 'm' passes through the loop, where m is less than n.

5. n-1, n, n+1 passes through the loop.
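A hedged sketch of how those five cases might look for a simple loop, using a made-up function that sums the first k elements of a list (here n, the maximum number of passes, is the list length):

def sum_first(values, k):
    """Hypothetical unit under test: loops over at most the first k elements."""
    total = 0
    for i in range(min(k, len(values))):
        total += values[i]
    return total

data = [1, 2, 3, 4, 5]           # n = 5 allowable passes through the loop
n = len(data)

assert sum_first(data, 0) == 0           # 1. skip the loop entirely
assert sum_first(data, 1) == 1           # 2. only one pass through the loop
assert sum_first(data, 2) == 3           # 3. two passes through the loop
assert sum_first(data, 3) == 6           # 4. m passes, where m < n
assert sum_first(data, n - 1) == 10      # 5. n-1 passes
assert sum_first(data, n) == 15          #    n passes
assert sum_first(data, n + 1) == 15      #    n+1 passes must not run past the data
print("all simple-loop cases passed")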

Nested Loops: If we extend the test approach from simple loops to nested loops, the number of possible

tests would grow geometrically as the level of nesting increases.

1. Start at the innermost loop. Set all other loops to minimum values.

2. Conduct simple loop tests for the innermost loop while holding the outer loops at their minimum iteration parameter values. Add other tests for out-of-range or excluded values.

3. Work outward, conducting tests for the next loop, but keep all other outer loops at minimum values and
other nested loops to "typical" values.

4. Continue until all loops have been tested.

Concatenated Loops: Concatenated loops can be tested using the approach defined for simple loops, if
each of the loops is independent of the other. However, if two loops are concatenated and the loop counter
for loop 1 is used as the initial value for loop 2, then the loops are not independent.
Unstructured Loops: Whenever possible, this class of loops should be redesigned to reflect the use of
the structured programming constructs.
What is Black Box Testing?

First we will learn what Black Box Testing is. Here we will discuss how black box testing is performed and the different BBT techniques used in testing.

Black Box Testing Method:

Black box testing is the software testing method which is used to test the software without knowing the internal structure of the code or program.

Most likely, this testing method is what most testers actually perform and use the majority of the time in practical life.

Basically, the software under test is treated as a "black box": we test the software without checking its internal structure. All testing is done from the customer's point of view, and the tester is only aware of what the software is supposed to do, not how the requests are processed by the software. While testing, the tester knows the inputs and the expected outputs of the software, but is not aware of how the software or application actually processes the input requests and produces the outputs. The tester passes valid as well as invalid inputs and determines the correct expected outputs. All the test cases used with this method are derived from the requirements and specification documents.

The main purpose of black box testing is to check whether the software is working as expected per the requirement document and whether it is meeting the user's expectations or not.

There are different types of testing used in industry. Each testing type has its own advantages and disadvantages, so not all bugs can be found using black box testing or white box testing alone.

Types of Black Box Testing Techniques: The following black box testing techniques are used for testing the software application:

 Boundary Value Analysis (BVA)
 Equivalence Class Partitioning
 Decision Table based testing
 Cause-Effect Graphing Technique
 Error Guessing

Boundary Value Analysis and Equivalence Class Partitioning are both test case design techniques in black box testing.

1) Boundary Value Analysis (BVA):

Boundary Value Analysis is the most commonly used test case design method for black box testing. As we all know, most errors occur at the boundaries of the input values. This is one of the techniques used to find errors at the boundaries of input values rather than in the center of the input value range.

Boundary Value Analysis is the next step after Equivalence Class Partitioning, in which all test cases are designed at the boundaries of the equivalence classes.

Let us take an example to explain this.

Suppose we have a software application which accepts an input value in a text box ranging from 1 to 1000. In this case we have valid and invalid inputs:

Invalid Input: 0 and less
Valid Input: 1 – 1000
Invalid Input: 1001 and above

Here are the test cases for an input box accepting numbers, using Boundary Value Analysis:

Min value – 1 : 0
Min value     : 1
Min value + 1 : 2
Normal value  : a value between 1 and 1000
Max value – 1 : 999
Max value     : 1000
Max value + 1 : 1001

This testing technique is not applicable if the boundary of the input value range is not fixed.
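Those boundary test cases could be expressed directly as checks against a hypothetical validation function for the 1-1000 field (the function below is made up for illustration):

def accepts(value):
    """Hypothetical validator for an input field that accepts 1 to 1000."""
    return 1 <= value <= 1000

# Boundary Value Analysis test cases for the 1-1000 range.
boundary_cases = {
    0:    False,   # min value - 1  -> invalid
    1:    True,    # min value      -> valid
    2:    True,    # min value + 1  -> valid
    500:  True,    # a normal value -> valid
    999:  True,    # max value - 1  -> valid
    1000: True,    # max value      -> valid
    1001: False,   # max value + 1  -> invalid
}

for value, expected in boundary_cases.items():
    assert accepts(value) == expected, f"boundary case failed for {value}"
print("all boundary cases passed")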

2) Equivalence Class Partitioning

Equivalence Class Partitioning is a black box test case design technique used for writing test cases. This approach is used to reduce a huge set of possible inputs to a small but equally effective set of inputs. This is done by dividing the inputs into classes and picking one value from each class. The method is used to avoid exhaustive testing and the redundancy of inputs.

In equivalence partitioning, the inputs are divided based on the input values:

 If the input value is a range, then we have one valid equivalence class and two invalid equivalence classes.
 If the input value is a specific set, then we have one valid equivalence class and one invalid equivalence class.
 If the input value is a number, then we have one valid equivalence class and two invalid equivalence classes.
 If the input value is a Boolean, then we have one valid equivalence class and one invalid equivalence class.
Equivalence partitioning is a test case design technique to divide the input data of the software
into different equivalence data classes. Test cases are designed for each equivalence data class. The
equivalence partitions are frequently derived from the requirements specification for input data
that influence the processing of the test object. Using this method reduces the time necessary
for testing the software by using fewer, but effective, test cases.

Equivalence Partitioning = Equivalence Class Partitioning = ECP

It can be used at any level of software testing and is usually a good technique to apply first.
In this technique, only one condition needs to be tested from each partition, because we assume that
all the conditions in one partition are handled in the same manner by the software. In a partition, if
one condition works the others will also work; likewise we assume that if one of the conditions
does not work, then none of the conditions in that partition will work.

Equivalence partitioning is a testing technique in which input values are divided into classes for testing.

 Valid Input Class = keeps all valid inputs.
 Invalid Input Class = keeps all invalid inputs.

Example of Equivalence Class Partitioning:

 A text field permits only numeric characters
 Length must be 6-10 characters long

Partitioning according to the requirement gives:

 Invalid partition: length 0 – 5
 Valid partition: length 6 – 10
 Invalid partition: length 11 – 14

While evaluating equivalence partitioning, values within a partition are treated as equivalent; that is why
lengths 0 – 5 are equivalent, 6 – 10 are equivalent and 11 – 14 are equivalent.

At the time of testing, test 4 and 12 as invalid values and 7 as a valid one.
It is easy to test an input range such as 6 – 10 but harder to test a range such as 2 – 600. Testing is easier
with fewer test cases, but you should be very careful: assuming the valid input is 7
means you believe that the developer coded the correct valid range (6-10).
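
The following short Python sketch (the names and the validation rule are stand-ins for the real field logic, not from the text) picks the representative lengths 4, 7 and 12 mentioned above, one from each partition, and checks them against a hypothetical validator:

# A minimal sketch of equivalence class partitioning for a text field that
# accepts only numeric characters and whose length must be 6-10.

def is_valid_field(text):
    """Hypothetical validation rule: digits only, length between 6 and 10."""
    return text.isdigit() and 6 <= len(text) <= 10

# One representative length per partition: 4 (invalid, 0-5),
# 7 (valid, 6-10), 12 (invalid, 11-14).
partitions = {
    "invalid, length 0-5":   (4,  False),
    "valid, length 6-10":    (7,  True),
    "invalid, length 11-14": (12, False),
}

for name, (length, expected) in partitions.items():
    sample = "1" * length
    result = is_valid_field(sample)
    print(f"{name}: tested length {length} -> "
          f"{'PASS' if result == expected else 'FAIL'}")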

What is Boundary value analysis:

Boundary value analysis is a test case design technique to test boundary values between
partitions (both the valid boundary partition and the invalid boundary partition). A boundary value is an
input or output value on the border of an equivalence partition, and includes the minimum and
maximum values just inside and just outside the boundaries. Normally boundary value analysis is part of
stress and negative testing.

Using the boundary value analysis technique, the tester creates test cases for the required input field. For
example, consider an Address text box which allows a maximum of 500 characters. Writing a test case for
every possible length would be very difficult, so boundary value analysis is chosen instead.

Example for Boundary Value Analysis:

Example 1

Suppose you have a very important tool at the office that requires a valid User Name and Password to
work, and each field accepts a minimum of 8 characters and a maximum of 12 characters. The valid range is
8-12, and the invalid ranges are 7 or fewer characters and 13 or more characters.

Write Test Cases for Valid partition value, Invalid partition value and exact boundary value.

 Test Case 1: Consider a password of length less than 8.
 Test Case 2: Consider a password of length exactly 8.
 Test Case 3: Consider a password of length between 9 and 11.
 Test Case 4: Consider a password of length exactly 12.
 Test Case 5: Consider a password of length more than 12.
Example 2

Test cases for the application whose input box accepts numbers between 1-1000. Valid range
1-1000, Invalid range 0 and Invalid range 1001 or more.

Write Test Cases for Valid partition value, Invalid partition value and exact boundary value.

 Test Case 1: Consider test data exactly at the input boundaries of the input domain, i.e. values 1
and 1000.
 Test Case 2: Consider test data with values just below the extreme edges of the input domain, i.e.
values 0 and 999.
 Test Case 3: Consider test data with values just above the extreme edges of the input domain, i.e.
values 2 and 1001.
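
Expressed as automated checks, the three test cases above might look like the hedged pytest sketch below; accepts_number is a hypothetical stand-in for the input box logic, not part of any real application:

# A sketch of Example 2 as automated boundary-value checks.
import pytest

def accepts_number(value):
    """Stand-in for the input box logic: accepts integers 1-1000."""
    return 1 <= value <= 1000

@pytest.mark.parametrize("value, expected", [
    (1, True), (1000, True),     # Test Case 1: exact boundaries
    (0, False), (999, True),     # Test Case 2: just below the edges
    (2, True), (1001, False),    # Test Case 3: just above the edges
])
def test_boundary_values(value, expected):
    assert accepts_number(value) is expected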

What is the Orthogonal Array Testing Technique (OATS)?

The combinatorial test technique, as the name suggests, is a technique of combining data /
entities as input parameters for testing, to increase the scope. This technique is beneficial when
we have to test with a huge amount of data having many permutations and combinations.

The beauty of this technique is that it maximizes coverage with a comparatively smaller number
of test cases. The pairs of parameters which are identified should be independent of each other.
It's a black box technique, so like other black box techniques we don't need to have implementation
knowledge of the system. The point here is to identify the correct pairs of input parameters.

There are many techniques of combinatorial test design (CTD), of which OATS (the orthogonal array
testing technique) is widely used.
How to use OATS?

Implementing OATS technique involves the below steps:

1. Identify the independent variables. These will be referred to as "Factors".

2. Identify the values which each variable will take. These will be referred to as "Levels".

3. Search for an orthogonal array that has all the factors from step 1 and all the levels from step 2.

4. Map the factors and levels to your requirement.

5. Translate them into suitable test cases.

6. Look out for the left-over or special test cases (if any).

Let me demonstrate it with an example:

Orthogonal Array Testing Technique (OATS) example:

Let us consider that you have to identify the test cases for a web page that has 4 sections: Headlines,
Details, References and Comments, each of which can be displayed, not displayed, or show an error message.
You are required to design the test conditions to test the interaction between the different sections.

In this case:

1. Number of independent variables (factors) = 4

2. Values that each variable can take = 3 (displayed, not displayed, and error message)

3. The orthogonal array needed is therefore 3^4 (4 factors at 3 levels each).

4. Google and find an appropriate array for 4 factors and 3 levels (a standard L9 array fits). For this example, I am
referencing such a table.
5. Now, map this array with our requirements as below:

 1 will represent the "Is Displayed" value
 2 will represent the "Not Displayed" value
 3 will represent the "Error Message" value
 Factor A will represent the "Headlines" section
 Factor B will represent the "Details" section
 Factor C will represent the "References" section
 Factor D will represent the "Comments" section
 Experiment No. will represent "Test Case #"

6. After mapping, each row of the orthogonal array becomes one test condition covering the four
sections (the original mapped table is not reproduced here).

Based on the mapped array, design your test cases. Also look out for the special test cases / left-over test cases.
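
As an illustration only (the array below is an assumed rendering of the standard L9(3^4) orthogonal array, not the author's original table), the following Python sketch prints the nine resulting test conditions for the four sections:

# Mapping a standard L9(3^4) orthogonal array onto the web page example:
# 4 factors (Headlines, Details, References, Comments), each at 3 levels.

L9 = [  # each row is one test case; entries are level numbers 1..3
    (1, 1, 1, 1), (1, 2, 2, 2), (1, 3, 3, 3),
    (2, 1, 2, 3), (2, 2, 3, 1), (2, 3, 1, 2),
    (3, 1, 3, 2), (3, 2, 1, 3), (3, 3, 2, 1),
]

FACTORS = ["Headlines", "Details", "References", "Comments"]
LEVELS = {1: "Is Displayed", 2: "Not Displayed", 3: "Error Message"}

for test_no, row in enumerate(L9, start=1):
    settings = ", ".join(f"{factor}={LEVELS[level]}"
                         for factor, level in zip(FACTORS, row))
    print(f"Test Case #{test_no}: {settings}")

Every pair of factors appears in every combination of levels exactly once across the nine rows, which is what gives the technique its pairwise coverage with so few test cases.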

Conclusion:

None of the testing techniques provides a guarantee of 100% coverage. Each technique has its
own way of selecting the test conditions. Along the same lines, there are some limitations of using
this technique:

 Testing will fail if we fail to identify the good pairs.
 There is a probability of not identifying the most important combination, which can result in missing a defect.
 This technique will fail if we do not know the interactions between the pairs.
 Applying only this technique will not ensure complete coverage.
 It can find only those defects which arise due to pairs of input parameters.
BCA 5TH SEM
Unit 3 Software Testing

Format of test cases.

DEFINITION

A test case is a set of conditions or variables under which a tester will determine
whether a system under test satisfies requirements or works correctly.

The process of developing test cases can also help find problems in the
requirements or design of an application.

TEST CASE TEMPLATE

A test case can have the following elements. Note, however, that normally a test
management tool is used by companies and the format is determined by the tool
used.

Test Suite ID: The ID of the test suite to which this test case belongs.
Test Case ID: The ID of the test case.
Test Case Summary: The summary / objective of the test case.
Related Requirement: The ID of the requirement this test case relates/traces to.
Prerequisites: Any prerequisites or preconditions that must be fulfilled prior to executing the test.
Test Procedure: Step-by-step procedure to execute the test.
Test Data: The test data, or links to the test data, that are to be used while conducting the test.
Expected Result: The expected result of the test.
Actual Result: The actual result of the test; to be filled after executing the test.
Status: Pass or Fail. Other statuses can be 'Not Executed' if testing is not performed and 'Blocked' if testing is blocked.
Remarks: Any comments on the test case or test execution.
Created By: The name of the author of the test case.
Date of Creation: The date of creation of the test case.
Executed By: The name of the person who executed the test.
Date of Execution: The date of execution of the test.
Test Environment: The environment (Hardware/Software/Network) in which the test was executed.

TEST CASE EXAMPLE / TEST CASE SAMPLE

Test Suite ID: TS001
Test Case ID: TC001
Test Case Summary: To verify that clicking the Generate Coin button generates coins.
Related Requirement: RS001
Prerequisites:
1. User is authorized.
2. Coin balance is available.
Test Procedure:
1. Select the coin denomination in the Denomination field.
2. Enter the number of coins in the Quantity field.
3. Click Generate Coin.
Test Data:
1. Denominations: 0.05, 0.10, 0.25, 0.50, 1, 2, 5
2. Quantities: 0, 1, 5, 10, 20
Expected Result:
1. A coin of the specified denomination should be produced if the specified quantity is valid (1, 5).
2. A message 'Please enter a valid quantity between 1 and 10' should be displayed if the specified quantity is invalid.
Actual Result:
1. If the specified quantity is valid, the result is as expected.
2. If the specified quantity is invalid, nothing happens; the expected message is not displayed.
Status: Fail
Remarks: This is a sample test case.
Created By: John Doe
Date of Creation: 01/14/2020
Executed By: Jane Roe
Date of Execution: 02/16/2020
Test Environment:
OS: Windows Y
Browser: Chrome N


WRITING GOOD TEST CASES

 As far as possible, write test cases in such a way that you test only one thing
at a time. Do not overlap or complicate test cases. Attempt to make your test
cases 'atomic'.

 Ensure that all positive scenarios and negative scenarios are covered.

 Language:
o Write in simple and easy to understand language.
o Use active voice: Do this, do that.
o Use exact and consistent names (of forms, fields, etc.).

 Characteristics of a good test case:
o Accurate: Tests exactly what its purpose states.
o Economical: No unnecessary steps or words.
o Traceable: Capable of being traced to requirements.
o Repeatable: Can be used to perform the test over and over.
o Reusable: Can be reused if necessary.

Test Strategy

The first step in planning white box testing is to develop a test strategy based on
risk analysis. The purpose of a test strategy is to clarify the major activities
involved, key decisions made, and challenges faced in the testing effort. This
includes identifying testing scope, testing techniques, coverage metrics, test
environment, and test staff skill requirements. The test strategy must account for
the fact that time and budget constraints prohibit testing every component of a
software system and should balance test effectiveness with test efficiency based on
risks to the system. The level of effectiveness necessary depends on the use of
software and its consequence of failure. The higher the cost of failure for software,
the more sophisticated and rigorous a testing approach must be to ensure
effectiveness. Risk analysis provides the right context and information to derive a
test strategy.

Test strategy is essentially a management activity. A test manager (or similar role)
is responsible for developing and managing a test strategy. The members of the
development and testing team help the test manager develop the test strategy.

Test Plan

The test plan should manifest the test strategy. The main purpose of having a test
plan is to organize the subsequent testing process. It includes test areas covered,
test technique implementation, test case and data selection, test results validation,
test cycles, and entry and exit criteria based on coverage metrics. In general, the
test plan should incorporate both a high-level outline of which areas are to be
tested and what methodologies are to be used and a general description of test
cases, including prerequisites, setup, execution, and a description of what to look
for in the test results. The high-level outline is useful for administration, planning,
and reporting, while the more detailed descriptions are meant to make the test
process go smoothly.

While not all testers like using test plans, they provide a number of benefits:

 Test plans provide a written record of what is to be done. 



 Test plans allow project stakeholders to sign off on the intended testing
effort. This helps ensure that the stakeholders agree with and will continue to
support the test effort. 
 Test plans provide a way to measure progress. This allows testers to
determine whether they are on schedule, and also provides a concise way to
report progress to the stakeholders. 

 Due to time and budget constraints, it is often impossible to test all
components of a software system. A test plan allows the analyst to
succinctly record what the testing priorities are. 

 Test plans provide excellent documentation for testing subsequent
releases—they can be used to develop regression test suites and/or provide
guidance to develop new tests. 

A test manager (or similar role) is responsible for developing and managing a test
plan. The development managers are also part of test plan development, since the
schedules in the test plan are closely tied to that of the development schedules.

Test Case Development

The last activity within the test planning stage is test case development. Test case
description includes preconditions, generic or specific test inputs, expected results,
and steps to perform to execute the test. There are many definitions and formats for
test case description. In general, the intent of the test case is to capture what the
particular test is designed to accomplish. Risk analysis, test strategy, and the test
plan should guide test case development. Testing techniques and classes of tests
applicable to white box testing are discussed later in Sections 4.2 and 4.3
respectively.

The members of the testing team are responsible for developing and documenting
test cases.
Test Automation

Test automation provides automated support for the process of managing and
executing tests, especially for repeating past tests. All the tests developed for the
system should be collected into a test suite. Whenever the system changes, the
suite of tests that correspond to the changes or those that represent a set of
regression tests can be run again to see if the software behaves as expected. Test
drivers or suite drivers support executing test suites. Test drivers basically help in
setup, execution, assertion, and teardown for each of the tests. In addition to
driving test execution, test automation requires some automated mechanisms to
generate test inputs and validate test results. The nature of test data generation and
test results validation largely depends on the software under test and on the testing
intentions of particular tests.

In addition to test automation development, stubs or scaffolding development is


also required. Test scaffolding provides some of the infrastructure needed in order
to test efficiently. White box testing mostly requires some software development to
support executing particular tests. This software establishes an environment around
the test, including states and values for data structures, runtime error injection, and
acts as stubs for some external components. Much of what scaffolding is for
depends on the software under test. However, as a best practice, it is preferable to
separate the test data inputs from the code that delivers them, typically by putting
inputs in one or more separate data files. This simplifies test maintenance and
allows for reuse of test code. The members of the testing team are responsible for
test automation and supporting software development. Typically, a member of the
test team is dedicated to the development effort.
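
As a rough illustration of the driver responsibilities described above (setup, execution, assertion and teardown, with test data kept separate from the test code), here is a minimal Python unittest sketch; the class and data names are invented for the example and do not refer to any specific tool:

# A minimal test-driver sketch: setup, execution, assertion, teardown.
import json
import tempfile
import unittest

class CoinGeneratorTests(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # Setup: load test data from a separate source rather than hard-coding it.
        cls.test_data = json.loads('{"valid": [1, 5], "invalid": [0, 20]}')

    def setUp(self):
        # Setup: create a scratch environment for each test.
        self.workdir = tempfile.TemporaryDirectory()

    def test_valid_quantities(self):
        # Execution and assertion against the (hypothetical) system under test.
        for quantity in self.test_data["valid"]:
            self.assertTrue(1 <= quantity <= 10)

    def tearDown(self):
        # Teardown: release per-test resources.
        self.workdir.cleanup()

if __name__ == "__main__":
    unittest.main()

In a real suite the JSON string would be replaced by one or more external data files, which simplifies test maintenance and allows reuse of the test code, as noted above.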
Test Environment

Testing requires the existence of a test environment. Establishing and managing a


proper test environment is critical to the efficiency and effectiveness of testing. For
simple application programs, the test environment generally consists of a single
computer, but for enterprise-level software systems, the test environment is much
more complex, and the software is usually closely coupled to the environment.

For security testing, it is often necessary for the tester to have more control over
the environment than in many other testing activities. This is because the tester
must be able to examine and manipulate software/environment interactions at a
greater level of detail, in search of weaknesses that could be exploited by an
attacker. The tester must also be able to control these interactions. The test
environment should be isolated, especially if, for example, a test technique
produces potentially destructive results during test that might invalidate the results
of any concurrent test or other activity. Testing malware (malicious software) can
also be dangerous without strict isolation.

The test manager is responsible for coordinating test environment preparation.


Depending on the type of environment required, the members of the development,
testing, and build management teams are involved in test environment preparation.

Test Execution

Test execution involves running test cases developed for the system and reporting
test results. The first step in test execution is generally to validate the infrastructure
needed for running tests in the first place. This infrastructure primarily
encompasses the test environment and test automation, including stubs that might
be needed to run individual components, synthetic data used for testing or
populating databases that the software needs to run, and other applications that
interact with the software. The issues being sought are those that will prevent the
software under test from being executed or else cause it to fail for reasons not
related to faults in the software itself.

The members of the test team are responsible for test execution and reporting.

Testing Techniques

Data-Flow Analysis

Data-flow analysis can be used to increase program understanding and to develop


test cases based on data flow within the program. The data-flow testing technique
is based on investigating the ways values are associated with variables and the
ways that these associations affect the execution of the program. Data-flow
analysis focuses on occurrences of variables, following paths from the definition
(or initialization) of a variable to its uses. The variable values may be used for
computing values for defining other variables or used as predicate variables to
decide whether a predicate is true for traversing a specific execution path. A data-
flow analysis for an entire program involving all variables and traversing all usage
paths requires immense computational resources; however, this technique can be
applied for select variables. The simplest approach is to validate the usage of select
sets of variables by executing a path that starts with definition and ends at uses of
the definition. The path and the usage of the data can help in identifying suspicious
code blocks and in developing test cases to validate the runtime behavior of the
software. For example, for a chosen data definition-to-use path, with well-crafted
test data, testing can uncover time-of-check-to-time-of-use (TOCTTOU) flaws.
The ”Security Testing” section in [Howard 02] explains the data mutation
technique, which deals with perturbing environment data. The same technique can
be applied to internal data as well, with the help of data-flow analysis.
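
A small illustrative fragment (not taken from the references above) shows the kind of definition-to-use paths a data-flow test suite would exercise; apply_discount is a hypothetical function invented for the example:

# Illustrative definition/use occurrences followed by data-flow testing.

def apply_discount(price, is_member):
    discount = 0.0                     # definition of 'discount'
    if is_member:                      # predicate use (P-use) of 'is_member'
        discount = 0.1                 # re-definition of 'discount'
    total = price * (1 - discount)     # computation use (C-use) of 'discount'
    return total

# Two definition-to-use paths exist for 'discount':
#   (a) the initial definition -> C-use in 'total' (taken when is_member is False)
#   (b) the re-definition      -> C-use in 'total' (taken when is_member is True)
# A data-flow test suite exercises both paths:
assert apply_discount(100, False) == 100.0
assert abs(apply_discount(100, True) - 90.0) < 1e-9
print("both def-use paths for 'discount' exercised")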

Code-Based Fault Injection

The fault injection technique perturbs program states by injecting software source
code to force changes into the state of the program as it executes. Instrumentation
is the process of non-intrusively inserting code into the software that is being
analyzed and then compiling and executing the modified (or instrumented)
software. Assertions are added to the code to raise a flag when a violation
condition is encountered. This form of testing measures how software behaves
when it is forced into anomalous circumstances. Basically this technique forces
non-normative behavior of the software, and the resulting understanding can help
determine whether a program has vulnerabilities that can lead to security
violations. This technique can be used to force error conditions to exercise the
error handling code, change execution paths, input unexpected (or abnormal) data,
change return values, etc. In [Thompson 02], runtime fault injection is explained
and advocated over code-based fault injection methods. One of the drawbacks of
code based methods listed in the book is the lack of access to source code.
However, in this content area, the assumptions are that source code is available and
that the testers have the knowledge and expertise to understand the code for
security implications. Refer to [Voas 98] for a detailed understanding of software
fault injection concepts, methods, and tools.
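
The sketch below is a much-simplified illustration of the idea (it is not one of the referenced tools or methods): a fault is injected by substituting a dependency that always fails, forcing the error-handling path of the code under test to execute. All names are hypothetical.

# Forcing an anomalous condition so the error-handling code is exercised.

def load_config(read_file):
    """Code under test: must fall back to defaults if the file cannot be read."""
    try:
        return read_file("settings.cfg")
    except OSError:
        return {"mode": "default"}      # documented fallback behaviour

def injected_faulty_reader(path):
    # Fault injection: simulate a runtime failure of the external dependency.
    raise OSError(f"injected fault while reading {path}")

result = load_config(injected_faulty_reader)
assert result == {"mode": "default"}, "error handling path misbehaved"
print("fault-injection test passed:", result)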

Abuse Cases

Abuse cases help security testers view the software under test in the same light as
attackers do. Abuse cases capture the non-normative behavior of the system. While
in [McGraw 04c] abuse cases are described more as a design analysis technique
than as a white box testing technique, the same technique can be used to develop
innovative and effective test cases mirroring the way attackers would view the
system. With access to the source code, a tester is in a better position to quickly see
where the weak spots are compared to an outside attacker. The abuse case can also
be applied to interactions between components within the system to capture
abnormal behavior, should a component misbehave. The technique can also be
used to validate design decisions and assumptions. The simplest, most practical
method for creating abuse cases is usually through a process of informed
brainstorming, involving security, reliability, and subject matter expertise. Known
attack patterns form a rich source for developing abuse cases.

Trust Boundaries Mapping

Defining zones of varying trust in an application helps identify vulnerable areas of


communication and possible attack paths for security violations. Certain
components of a system have trust relationships (sometimes implicit, sometimes
explicit) with other parts of the system. Some of these trust relationships offer
”trust elevation” possibilities—that is, these components can escalate trust
privileges of a user when data or control flow cross internal boundaries from a
region of less trust to a region of more trust [Hoglund 04]. For systems that have n-
tier architecture or that rely on several third-party components, the potential for
missing trust validation checks is high, so drawing trust boundaries becomes
critical for such systems. Drawing clear boundaries of trust on component
interactions and identifying data validation points (or chokepoints, as described in
[Howard 02]) helps in validating those chokepoints and testing some of the design
assumptions behind trust relationships. Combining trust zone mapping with data-
flow analysis helps identify data that move from one trust zone to another and
whether data checkpoints are sufficient to prevent trust elevation possibilities. This
insight can be used to create effective test cases.

Code Coverage Analysis

Code coverage is an important type of test effectiveness measurement. Code


coverage is a way of determining which code statements or paths have been
exercised during testing. With respect to testing, coverage analysis helps in
identifying areas of code not exercised by a set of test cases. Alternatively,
coverage analysis can also help in identifying redundant test cases that do not
increase coverage. During ad hoc testing (testing performed without adhering to
any specific test approach or process), coverage analysis can greatly reduce the
time to determine the code paths exercised and thus improve understanding of code
behavior. There are various measures for coverage, such as path coverage, path
testing, statement coverage, multiple condition coverage, and function coverage.
When planning to use coverage analysis, establish the coverage measure and the
minimum percentage of coverage required. Many tools are available for code
coverage analysis. It is important to note that coverage analysis should be used to
measure test coverage and should not be used to create tests. After performing
coverage analysis, if certain code paths or statements were found to be not covered
by the tests, the questions to ask are whether the code path should be covered and
why the tests missed those paths. A risk-based approach should be employed to
decide whether additional tests are required. Covering all the code paths or
statements does not guarantee that the software does not have faults; however, the
missed code paths or statements should definitely be inspected. One obvious risk is
that unexercised code will include Trojan horse functionality, whereby seemingly
innocuous code can carry out an attack. Less obvious (but more pervasive) is the
risk that unexercised code has serious bugs that can be leveraged into a successful
attack [McGraw 02].

Classes of Tests

Creating security tests other than ones that directly map to security specifications is
challenging, especially tests that intend to exercise the non-normative or non-
functional behavior of the system. When creating such tests, it is helpful to view
the software under test from multiple angles, including the data the system is
handling, the environment the system will be operating in, the users of the software
(including software components), the options available to configure the system,
and the error handling behavior of the system. There is an obvious interaction and
overlap between the different views; however, treating each one with specific
focus provides a unique perspective that is very helpful in developing effective
tests.

Data

All input data should be untrusted until proven otherwise, and all data must be
validated as it crosses the boundary between trusted and untrusted environments
[Howard 02]. Data sensitivity/criticality plays a big role in data-based testing;
however, this does not imply that other data can be ignored—non-sensitive data
could allow a hacker to control a system. When creating tests, it is important to test
and observe the validity of data at different points in the software. Tests based on
data and data flow should explore incorrectly formed data and stressing the size of
the data. The section ”Attacking with Data Mutation” in [Howard 02] describes
different properties of data and how to mutate data based on given properties. To
understand different attack patterns relevant to program input, refer to chapter six,
”Crafting (Malicious) Input,” in [Hoglund 04]. Tests should validate data from all
channels, including web inputs, databases, networks, file systems, and environment
variables. Risk analysis should guide the selection of tests and the data set to be
exercised.

Fuzzing

Although normally associated exclusively with black box security testing, fuzzing
can also provide value in a white box testing program. Specifically, [Howard 06]
introduced the concept of “smart fuzzing.” Indeed, a rigorous testing program
involving smart fuzzing can be quite similar to the sorts of data testing scenarios
presented above and can produce useful and meaningful results as well. [Howard
06] claims that Microsoft finds some 20 to 25 percent of the bugs in their code via
fuzzing techniques. Although much of that is no doubt “dumb” fuzzing in black
box tests, “smart” fuzzing should also be strongly considered in a white box testing
program.
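
A minimal "smart fuzzing" sketch in Python is shown below; the parse_record routine and the mutation operators are invented for illustration and are not from the referenced works. The fuzzer starts from a known-good input, applies small mutations, and flags any exception the code does not handle deliberately:

# Mutate a valid input and watch for unhandled exceptions.
import random

def parse_record(line):
    """Hypothetical parser: expects 'name,age' with a numeric age."""
    name, age = line.split(",")
    return {"name": name, "age": int(age)}

def mutate(seed):
    """Apply one random mutation to a known-good input."""
    ops = [
        lambda s: s.replace(",", ""),                 # drop the delimiter
        lambda s: s + "A" * random.randint(1, 50),    # append junk / grow length
        lambda s: s[: random.randint(0, len(s))],     # truncate
    ]
    return random.choice(ops)(seed)

random.seed(0)
for _ in range(20):
    candidate = mutate("alice,30")
    try:
        parse_record(candidate)
    except (ValueError, KeyError):
        pass                         # expected, handled rejection of bad input
    except Exception as exc:         # anything else is a potential defect
        print("possible defect with input", repr(candidate), "->", exc)

Because the mutations start from structurally valid data, this is the "smart" end of fuzzing: the inputs penetrate deeper into the parsing logic than purely random bytes would.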

Environment

Software can only be considered secure if it behaves securely under all operating
environments. The environment includes other systems, users, hardware, resources,
networks, etc. A common cause of software field failure is miscommunication
between the software and its environment [Whittaker 02]. Understanding the
environment in which the software operates, and the interactions between the
software and its environment, helps in uncovering vulnerable areas of the system.
Understanding dependency on external resources (memory, network bandwidth,
databases, etc.) helps in exploring the behavior of the software under different
stress conditions. Another common source of input to programs is environment
variables. If the environment variables can be manipulated, then they can have
security implications. Similar conditions occur for registry information,
configuration files, and property files. In general, analyzing entities outside the
direct control of the system provides good insights in developing tests to ensure the
robustness of the software under test, given the dependencies.

Component Interfaces

Applications usually communicate with other software systems. Within an


application, components interface with each other to provide services and
exchange data. Common causes of failure at interfaces are misunderstanding of
data usage, data lengths, data validation, assumptions, trust relationships, etc.
Understanding the interfaces exposed by components is essential in exposing
security bugs hidden in the interactions between components. The need for such
understanding and testing becomes paramount when third-party software is used or
when the source code is not available for a particular component. Another
important benefit of understanding component interfaces is validation of principles
of compartmentalization. The basic idea behind compartmentalization is to
minimize the amount of damage that can be done to a system by breaking up the
system into as few units as possible while still isolating code that has security
privileges [McGraw 02]. Test cases can be developed to validate
compartmentalization and to explore failure behavior of components in the event
of security violations and how the failure affects other components.

Configuration

In many cases, software comes with various parameters set by default, possibly
with no regard for security. Often, functional testing is performed only with the
default settings, thus leaving sections of code related to non-default settings
untested. Two main concerns with configuration parameters with respect to
security are storing sensitive data in configuration files and configuration
parameters changing the flow of execution paths. For example, user privileges,
user roles, or user passwords are stored in the configuration files, which could be
manipulated to elevate privilege, change roles, or access the system as a valid user.
Configuration settings that change the path of execution could exercise vulnerable
code sections that were not developed with security in mind. The change of flow
also applies to cases where the settings are changed from one security level to
another, where the code sections are developed with security in mind. For example,
changing an endpoint from requiring authentication to not requiring authentication
means the endpoint can be accessed by everyone. When a system has multiple
configurable options, testing all combinations of configuration can be time
consuming; however, with access to source code, a risk-based approach can help in
selecting combinations that have higher probability in exposing security violations.
In addition, coverage analysis should aid in determining gaps in test coverage of
code paths.

Error handling

The most neglected code paths during the testing process are error handling
routines. Error handling in this paper includes exception handling, error recovery,
and fault tolerance routines. Functionality tests are normally geared towards
validating requirements, which generally do not describe negative (or error)
scenarios. Even when negative functional tests are created, they don’t test for non-
normative behavior or extreme error conditions, which can have security
implications. For example, functional stress testing is not performed with an
objective to break the system to expose security vulnerability. Validating the error
handling behavior of the system is critical during security testing, especially
subjecting the system to unusual and unexpected error conditions. Unusual errors
are those that have a low probability of occurrence during normal usage.
Unexpected errors are those that are not explicitly specified in the design
specification, and the developers did not think of handling the error. For example,
a system call may throw an ”unable to load library” error, which may not be
explicitly listed in the design documentation as an error to be handled. All aspects
of error handling should be verified and validated, including error propagation,
error observability, and error recovery. Error propagation is how the errors are
propagated through the call chain. Error observability is how the error is identified
and what parameters are passed as error messages. Error recovery is getting back
to a state conforming to specifications. For example, return codes for errors may
not be checked, leading to uninitialized variables and garbage data in buffers; if the
memory is manipulated before causing a failure, the uninitialized memory may
contain attacker-supplied data. Another common mistake to look for is when
sensitive information is included as part of the error messages.
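
As a hedged illustration of such a negative test (the CoinService class is invented, echoing the earlier sample test case), the pytest sketch below checks error propagation, error observability and error recovery for an invalid input:

# A negative test that validates error-handling behaviour.
import pytest

class CoinService:
    def __init__(self):
        self.balance = 10

    def generate(self, quantity):
        if not 1 <= quantity <= 10:
            # Error observability: a clear message without internal details.
            raise ValueError("Please enter a valid quantity between 1 and 10")
        self.balance -= quantity
        return quantity

def test_invalid_quantity_is_rejected_and_state_is_preserved():
    service = CoinService()
    with pytest.raises(ValueError) as err:
        service.generate(20)
    assert "valid quantity" in str(err.value)         # error propagation / observability
    assert "password" not in str(err.value).lower()   # no sensitive data in the message
    assert service.balance == 10                      # error recovery: state unchanged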

Tools

Source Code Analysis

Source code analysis is the process of checking source code for coding problems
based on a fixed set of patterns or rules that might indicate possible security
vulnerabilities. Static analysis tools scan the source code and automatically detect
errors that typically pass through compilers and become latent problems. The
strength of static analysis depends on the fixed set of patterns or rules used by the
tool; static analysis does not find all security issues. The output of the static
analysis tool still requires human evaluation. For white box testing, the results of
the source code analysis provide a useful insight into the security posture of the
application and aid in test case development. White box testing should verify that
the potential vulnerabilities uncovered by the static tool do not lead to security
violations. Some static analysis tools provide data-flow and control-flow analysis
support, which are useful during test case development.

A detailed discussion about the analysis or the tools is outside the scope of this
content area. This section addresses how source code analysis tools aid in white
box testing. For a more detailed discussion on the analysis and the tools, refer to
the Source Code Analysis content area.

Program Understanding Tools

In general, white box testers should have access to the same tools, documentation,
and environment as the developers and functional testers on the project do. In
addition, tools that aid in program understanding, such as software visualization,
code navigation, debugging, and disassembly tools, greatly enhance productivity
during testing.

Coverage Analysis

Code coverage tools measure how thoroughly tests exercise programs. There are
many different coverage measures, including statement coverage, branch coverage,
and multiple-condition coverage. The coverage tool would modify the code to
record the statements executed and which expressions evaluate which way (the true
case or the false case of the expression). Modification is done either to the source
code or to the executable the compiler generates. There are several commercial and
freeware coverage tools available. Coverage tool selection should be based on the
type of coverage measure selected, the language of the source code, the size of the
program, and the ease of integration into the build process.

Profiling

Profiling allows testers to learn where the software under test is spending most of
its time and the sequence of function calls as well. This information can show
which pieces of the software are slower than expected and which functions are
called more often than expected. From a security testing perspective, the
knowledge of performance bottlenecks helps uncover vulnerable areas that are not
apparent during static analysis. The call graph produced by the profiling tool is
helpful in program understanding. Certain profiling tools also detect memory leaks
and memory access errors (potential sources of security violations). In general, the
functional testing team or the development team should have access to the
profiling tool, and the security testers should use the same tool to understand the
dynamic behavior of the software under test.

Data Flow Testing

Data flow testing can be considered to be a form of structural testing: in contrast to
functional testing, where the program can be tested without any knowledge of its
internal structures, structural testing techniques require the tester to have access to
details of the program's structure. Data flow testing focuses on the variables used
within a program. Variables are defined and used at different points within the
program; data flow testing allows the tester to chart the changing values of
variables within the program. It does this by utilising the concept of a program
graph: in this respect, it is closely related to path testing, however the paths are
selected on the basis of variables. There are two major forms of data flow testing: the first,
called define/use testing, uses a number of simple rules and test coverage metrics;
the second uses "program slices" – segments of a program. The importance of
analysing the use of variables in programs has been recognised for a long time.
Compilers for languages such as COBOL introduced listings that report the lines
at which each variable is defined or used, and variables have long been seen as one
of the main areas where a program can be tested structurally. Early
methods of data testing involved static analysis: the compiler produces a list of
lines at which variables are defined or used. The term static analysis refers to the
fact that the tester does not have to run the program to analyse it. Static analysis
allows the tester, according to Jorgensen, to focus on three "define/reference
anomalies": a variable that is defined but never used (referenced); a variable that
is used but never defined; and a variable that is defined twice before it is used.
Debugging vs. Testing

Debugging is the process of finding errors in a program under development that is
not thought to be correct. Testing is the process of attempting to find errors in a
program that is thought to be correct. Testing attempts to establish that a program
satisfies its specification. Testing can establish the presence of errors but cannot
guarantee their absence (E.W. Dijkstra).

Exhaustive testing is not possible for real programs due to the combinatorial
explosion of possible test cases. The amount of testing performed must be balanced
against the cost of undiscovered errors. Regression testing is used to compare a
modified version of a program against a previous version.
Testing & Bad Attitude
The software tester's job is to find as many errors as possible in the program under
test with the least effort expended. A productivity measure is errors discovered per
hour of testing. The tester's attitude should be "How can I break this program?" and
not "What can I do to make this program look good?". Don't let the developer's ego
stand in the way of vigorous testing. Test case selection is one key factor in
successful testing. Insight and imagination are essential in the design of good test
cases. Testers should be independent from implementors to eliminate oversights due
to propagated misconceptions.
Levels of Program Correctness
1. No syntax errors [compilers and strongly-typed programming languages].
2. No semantic errors.
3. There exists some test data for which the program gives a correct answer.
4. The program gives a correct answer for reasonable or random test data.
5. The program gives a correct answer for difficult test data.
6. For all legal input data the program gives the correct answer.
7. For all legal input and all likely erroneous input the program gives a
correct or reasonable answer.

8. For all input the program gives a correct or reasonable answer.


Causes of Faults During Development
Requirements: Incorrect, missing or unclear requirements.
Specification: Incorrect translation to design.
Design: Incorrect or unclear specification.
Implementation: Misinterpretation of design; incorrect documentation;
misinterpretation of programming language semantics;
new faults introduced by fault repair.
Maintenance: Incorrect documentation; new faults introduced by fault repair;
changes in requirements.
Types of Faults
_ Algorithmic faults - incorrect algorithm

_ Language usage faults - misunderstand/misuse language.

_ Computation and precision faults.

_ Documentation faults.

_ Stress or overload faults.

_ Capacity or boundary faults.

_ Throughput or performance faults.

_ Recovery faults.

_ Hardware and system software faults.

Testing should be a Planned Activity

_ Testing should be planned for during Requirements Definition and Specification.

_ Allocate human and computer resources for testing.

_ May need to design hooks for testing into the software.

_ May need to develop testing tools, test drivers, databases of test data, etc.

_ Testability should be one of the requirements of every software system.

_ A Test Plan is usually developed to describe the testing process in detail.

_ Testing must be planned for in the overall project schedule. Allow time for fixing
problems discovered during testing.

_ Testing activity should be traceable back to requirements and specification.
Testing Documentation
Testing requirements and Test Specification are developed during the
requirements and specification phases of the system design.
_ Test Plan describes the sequence of testing activities and describes the testing
software that will be constructed
_ Test Procedure specifies the order of system integration and modules to be
tested. It also describes unit tests and test data
_ Logging of tests conducted and archiving of test results is often an important part
of the Test Procedure

Test Plan Components


Establish test objectives

Designing test cases

Writing test cases

Testing test cases

Executing tests

Evaluating test results


Testing Coverage
One way to assess the thoroughness of testing is to look at the testing coverage.
Testing coverage measures what fraction of the program has actually been
exercised by a suite of tests.
The ideal situation is for 100% of the program to be covered by a test suite.
In practice 100% coverage is very hard to achieve.
In analyzing test coverage we think in terms of the program's control flow
graph, which consists of a large number of nodes (basic blocks, serial
computation) connected by edges that represent the branching and function-call
structure of the program.
Coverage can be analyzed in terms of program control flow or in terms of
dataflow.
Automated tools can be used to estimate the coverage produced by a test suite.
Control Flow based Coverage
Statement coverage - every statement (i.e. all nodes in the program's control
flow graph) is executed at least once.
All-paths coverage - every possible control flow path through the program is
traversed at least once. Equivalent to an exhaustive test of the program.
Branch coverage - for all decision points (e.g. if and switch) every possible
branch is taken at least once.
Multiple-predicate coverage - Boolean expressions may contain embedded
branching (e.g. A && ( B || C ) ). Multiple-predicate coverage requires testing
each Boolean expression for all possible combinations of the elementary
predicates.
Cyclomatic number coverage - all linearly independent paths through the control
flow graph are tested.
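
To see how these measures differ in practice, consider the small illustrative function below (invented for this example, not from the text): two calls are enough for statement and branch coverage of its single decision, but multiple-predicate coverage of A && (B || C) requires all eight combinations of the elementary predicates.

# Comparing statement, branch and multiple-predicate coverage.
from itertools import product

def grant_access(is_admin, has_token, on_allowlist):
    if is_admin and (has_token or on_allowlist):   # compound decision A && (B || C)
        return "granted"
    return "denied"

# Statement coverage: these two calls execute every statement.
statement_suite = [(True, True, False), (False, False, False)]

# Branch coverage: the same two calls already take the decision both ways here,
# so no extra cases are needed for this particular function.
branch_suite = statement_suite

# Multiple-predicate coverage: every combination of the elementary predicates.
multiple_condition_suite = list(product([True, False], repeat=3))   # 8 cases

for args in multiple_condition_suite:
    print(args, "->", grant_access(*args))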

Data Flow Based Coverage

All-uses coverage - traverse at least once every definition-free path from every
definition to all P-uses or C-uses of that definition.

All-DU-path coverage - all-uses coverage plus the constraint that every
definition-clear path is either cycle-free or a simple cycle.

All-defs coverage - test so that each definition is used at least once.

All-C-uses/Some-P-uses coverage - test so that all definition-free paths from each
definition to all C-uses are tested. If a definition is used only in predicates, test at
least one P-use.

All-P-uses/Some-C-uses coverage - test so that all definition-free paths from each
definition to all P-uses are tested. If a definition is used only in computations,
test at least one C-use.

All-P-uses coverage - test so that all definition-free paths from every definition to every
P-use is tested.
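
The short annotated fragment below (an invented example) marks the definitions, P-uses and C-uses that these criteria refer to, and shows a small test set aimed at all-defs coverage:

# Definitions, P-uses and C-uses in a small function.

def classify(score):
    grade = "fail"           # def of 'grade'
    passed = score >= 40     # def of 'passed' (and a C-use of 'score')
    if passed:               # P-use of 'passed'
        grade = "pass"       # re-def of 'grade'
    if score >= 75:          # P-use of 'score'
        grade = "merit"      # re-def of 'grade'
    return grade             # C-use of 'grade'

# For all-defs coverage, each definition of 'grade' must reach the use at the
# return statement at least once; scores 30, 50 and 80 achieve that together.
for score in (30, 50, 80):
    print(score, "->", classify(score))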
Error Based Testing

_ This testing technique is based on assumptions about probable types of errors
in a program.

_ Historical experience tells us that many program errors occur at boundary


points between different domains

– input domains

– internal domains (processing cases)

– Output domains

_ Error based testing tries to identify such domains and generate tests on both sides
of every boundary.

Sources for Test Cases

_ Requirements and Specification Documents

_ General knowledge of the application area

_ Program design and user documentation


_ Specific knowledge of the program source code

_ Specific knowledge of the programming language and implementation


techniques

_ Test at and near (inside and outside) the boundaries of the program's
applicability.

_ Test with random data

_ Test for response to probable errors (e.g. invalid input data)

_ Think nasty when designing test cases.

Try to destroy the program with your test data

UNIT 4 SOFTWARE TESTING BCA 5TH SEM

SOFTWARE VERIFICATION AND VALIDATION


INTRODUCTION

Software verification and validation activities check the software against its
specifications. Every project must verify and validate the software it produces.
This is done by:

· checking that each software item meets specified requirements;

· checking each software item before it is used as an input to another


activity;

· ensuring that checks on each software item are done, as far as


possible, by someone other than the author;

· ensuring that the amount of verification and validation effort is


adequate to show each software item is suitable for operational use.

Project management is responsible for organising software verification and


validation activities, the definition of software verification and validation roles
(e.g. review team leaders), and the allocation of staff to those roles.

Whatever the size of project, software verification and validation greatly affects
software quality. People are not infallible, and software that has not been verified
has little chance of working. Typically, 20 to 50 errors per 1000 lines of code are
found during development, and 1.5 to 4 per 1000 lines of code remain even after
system testing [Ref 20]. Each of these errors could lead to an operational failure or
non-compliance with a requirement. The objective of software verification and
validation is to reduce software errors to an acceptable level. The effort needed can
range from 30% to 90% of the total project resources, depending upon the
criticality and complexity of the software

2.2 PRINCIPLES OF SOFTWARE VERIFICATION AND


VALIDATION
Verification can mean the:

· act of reviewing, inspecting, testing, checking, auditing, or otherwise
establishing and documenting whether items, processes, services or
documents conform to specified requirements [Ref.5];

· process of evaluating a system or component to determine whether


the products of a given development phase satisfy the conditions
imposed at the start of the phase [Ref. 6]

· formal proof of program correctness [Ref.6].

The first definition of verification in the list above is the most general and includes
the other two. In ESA PSS-05-0, the first definition applies.

Validation is, according to its ANSI/IEEE definition, 'the process of evaluating a


system or component during or at the end of the development process to determine
whether it satisfies specified requirements'. Validation is, therefore, 'end-to-end'
verification.

Verification activities include:

· technical reviews, walkthroughs and software inspections;

· checking that software requirements are traceable to user


requirements;

· checking that design components are traceable to software


requirements;

· unit testing;

· integration testing;

· system testing;

· acceptance testing;

· audit.

Verification activities may include carrying out formal proofs.

The activities to be conducted in a project are described in the Software


Verification and Validation Plan (SVVP).
2.3 REVIEWS

A review is 'a process or meeting during which a work product,


or set of work products, is presented to project personnel, managers,
users, customers, or other interested parties for comment or approval'
[Ref 6].

Reviews may be formal or informal. Formal reviews have


explicit and definite rules of procedure. Informal reviews have no
predefined procedures. Although informal reviews can be very useful for
educating project members and solving problems, this section is only
concerned with reviews that have set procedures, i.e. formal reviews.

Three kinds of formal review are normally used for software verification:

· technical review;

· walkthrough;

· audits.

These reviews are all 'formal reviews' in the sense that all have specific objectives
and procedures. They seek to identify defects and discrepancies of the software
against specifications, plans and standards.
Software inspections are a more rigorous alternative to walkthroughs, and are
strongly recommended for software with stringent reliability, security and safety
requirements. Methods for software inspections are described in Section 3.2.

The software problem reporting procedure and document change procedure


defined in Part 2, Section 3.2.3.2 of ESA PSS-05-0, and in more detail ESA PSS-
05-09 'Guide to Software Configuration Management', calls for a formal review
process for all changes to code and documentation. Any of the first two kinds of
formal review procedure can be applied for change control. The SRB, for example,
may choose to hold a technical review or walkthrough as necessary.

2.3.1 Technical reviews

Technical reviews evaluate specific software elements to verify progress against


the plan. The technical review process should be used for the UR/R, SR/R, AD/R,
DD/R and any critical design reviews.

The UR/R, SR/R, AD/R and DD/R are formal reviews held at the end of a phase to
evaluate the products of the phase, and to decide whether the next phase may be
started (UR08, SR09, AD16 and DD11).

Critical design reviews are held in the DD phase to review the detailed design of a
major component to certify its readiness for implementation (DD10).

2.3.1.1 Objectives

The objective of a technical review is to evaluate a specific set of


review items (e.g. document, source module) and provide management
with evidence that:

· they conform to specifications made in previous phases;

· they have been produced according to the project standards and


procedures;

· any changes have been properly implemented, and affect only those
systems identified by the change specification (described in a RID,
DCR or SCR).

2.3.1.2 Organisation

The technical review process is carried out by a review team,


which is made up of:

· a leader;

· a secretary;

· members.

In large and/or critical projects, the review team may be split into a review board
and a technical panel. The technical panel is usually responsible for processing
RIDs and the technical assessment of review items, producing as output a technical
panel report. The review board oversees the review procedures and then
independently assesses the status of the review items based upon the technical
panel report.

The review team members should have skills to cover all aspects of the review
items. Depending upon the phase, the review team may be drawn from:

· users;

· software project managers;

· software engineers;

· software librarians;

· software quality assurance staff;

· independent software verification and validation staff;

· independent experts not involved in the software development.

Some continuity of representation should be provided to ensure consistency.

The leader's responsibilities include:

· nominating the review team;


· organising the review and informing all participants of its date, place
and agenda;

· distribution of the review items to all participants before the meeting;

· organising as necessary the work of the review team;

· chairing the review meetings;

· issuing the technical review report.


The secretary will assist the leader as necessary and will be responsible for
documenting the findings, decisions and recommendations of the review team.

Team members examine the review items and attend review meetings. If the
review items are large, complex, or require a range of specialist skills for effective
review, the leader may share the review items among members.

2.3.1.3 Input

Input to the technical review process includes as appropriate:

· a review meeting agenda;

· a statement of objectives;

· the review items;


· specifications for the review items;
· plans, standards and guidelines that apply to the review items;

· RID, SPR and SCR forms concerning the review items;

· marked up copies of the review items;

· reports of software quality assurance staff.

2.3.1.4 Activities

The technical review process consists of the following activities:

preparation;

review meeting.

The review process may start when the leader considers the review items to be
stable and complete. Obvious signs of instability are the presence of TBDs or of
changes recommended at an earlier review meeting not yet implemented.

Adequate time should be allowed for the review process. This depends on the size
of project. A typical schedule for a large project (20 man years or more) is shown
in Table 2.3.1.4.
Event Time

Review items distributed R - 20 days

RIDs categorised and distributed R - 10 days

Review Meeting R

Issue of Report R + 20 days

Table 2.3.1.4: Review Schedule for a large project

Members may have to combine their review activities with other commitments,
and the review schedule should reflect this.

2.3.1.4.1 Preparation

The leader creates the agenda and distributes it, with the statements of objectives,
review items, specifications, plans, standards and guidelines (as appropriate) to the
review team.

Members then examine the review items. Each problem is recorded by completing
boxes one to six of the RID form. A RID should record only one problem, or group
of related problems. Members then pass their RIDs to the secretary, who numbers
each RID uniquely and forwards them to the author for comment. Authors add
their responses in box seven and then return the RIDs to the secretary.
The leader then categorises each RID as major, minor, or editorial. Major RIDs
relate to a problem that would affect capabilities, performance, quality, schedule
and cost. Minor RIDs request clarification on specific points and point out
inconsistencies. Editorial RIDs point out defects in format, spelling and grammar.
Several hundred RIDs can be generated in a large project review, and classification
is essential if the RIDs are to be dealt with efficiently. Failure to categorise the
RIDs can result in long meetings that concentrate on minor problems at the
expense of major ones.

Finally the secretary sorts the RIDs in order of the position of the discrepancy in
the review item. The RIDs are now ready for input to the review meeting.

Preparation for a Software Review Board follows a similar pattern, with RIDs
being replaced by SPRs and SCRs.

2.3.1.4.2 Review meeting

A typical review meeting agenda consists of:

Introduction;

Presentation of the review items;

Classification of RIDs;
Review of the major RIDs;

Review of the other RIDs;

Conclusion.
The introduction includes agreeing the agenda, approving the report of any
previous meetings and reviewing the status of outstanding actions.

After the preliminaries, authors present an overview of the review items. If this is
not the first meeting, emphasis should be given to any changes made since the
items were last discussed.

The leader then summarises the classification of the RIDs. Members may request
that RIDs be reclassified (e.g. the severity of a RID may be changed from minor to
major). RIDs that originate during the meeting should be held over for decision at
a later meeting, to allow time for authors to respond.

Major RIDs are then discussed, followed by the minor and editorial RIDs. The
outcome of the discussion of any defects should be noted by the secretary in the
review decision box of the RID form. This may be one of CLOSE, UPDATE,
ACTION or REJECT. The reason for each decision should be recorded. Closure
should be associated with the successful completion of an update. The nature of an
update should be agreed. Actions should be properly formulated, the person
responsible identified, and the completion date specified. Rejection is equivalent to
closing a RID with no action or update.
The conclusions of a review meeting should be agreed during the meeting. Typical
conclusions are:

authorisation to proceed to the next phase, subject to updates and actions being completed;

authorisation to proceed with a restricted part of the system;

a decision to perform additional work.

One or more of the above may be applicable.

If the review meeting cannot reach a consensus on RID dispositions and conclusions, possible actions are:

recording a minority opinion in the review report;

for one or more members to find a solution outside the meeting;

referring the problem to the next level of management.

2.3.1.5 Output
The output from the review is a technical review report that should contain the
following:

· abstract of the report;

· a list of the members;

· an identification of the review items;

· tables of RIDs, SPRs and SCRs organised according to category, with dispositions marked;

· a list of actions, with persons responsible identified and expected dates for completion defined.

This output can take the form of the minutes of the meeting, or be a self-standing
report. If there are several meetings, the collections of minutes can form the report,
or the minutes can be appended to a report summarising the findings. The report
should be detailed enough for management to judge what happened. If there have
been difficulties in reaching consensus during the review, it is advisable that the
output be signed off by members.

2.3.2 Walkthroughs

Walkthroughs should be used for the early evaluation of documents, models,


designs and code in the SR, AD and DD phases.
2.3.2.1 Objectives

The objective of a walkthrough is to evaluate a specific software element (e.g.


document, source module). A walkthrough should attempt to identify defects and
consider possible solutions. In contrast with other forms of review, secondary
objectives are to educate, and to resolve stylistic problems.

2.3.2.2 Organisation

The walkthrough process is carried out by a walkthrough team, which is made up


of:

· a leader;

· a secretary;

· the author (or authors);

· members.

The leader, helped by the secretary, is responsible for management tasks associated
with the walkthrough. The specific responsibilities of the leader include:

· nominating the walkthrough team;

· organising the walkthrough and informing all participants of the date, place and agenda of walkthrough meetings;

· distribution of the review items to all participants before walkthrough meetings;

· organising as necessary the work of the walkthrough team;

· chairing the walkthrough meeting;

· issuing the walkthrough report.

The author is responsible for the production of the review items, and for presenting
them at the walkthrough meeting.

Members examine review items, report errors and recommend solutions.
2.3.2.3 Input

Input to the walkthrough consists of:

· a statement of objectives in the form of an agenda;

· the review items;

· standards that apply to the review items;

· specifications that apply to the review items.


2.3.2.4 Activities

The walkthrough process consists of the following activities:

preparation;

review meeting.

2.3.2.4.1 Preparation

The moderator or author distributes the review items when the


author decides that they are ready for walkthrough. Members should
examine the review items prior to the meeting. Concerns should be noted
on RID forms so that they can be raised at the appropriate point in the
walkthrough meeting.

2.3.2.4.2 Review meeting

The review meeting begins with a discussion of the agenda and


the report of the previous meeting. The author then provides an overview
of the review items.

A general discussion follows, during which issues of the


structure, function and scope of the review items should be raised.

The author then steps through the review items, such as


documents and source modules (in contrast technical reviews step
through RIDs, not the items themselves). Members raise issues about
specific points as they are reached in the walkthrough.

As the walkthrough proceeds, errors, suggested changes and


improvements are noted on RID forms by the secretary.

2.3.2.5 Output

The output from the walkthrough is a walkthrough report that


should contain the following:

· a list of the members;

· an identification of the review items;

· a list of changes and defects noted during the walkthrough;

· completed RID forms;

· a list of actions, with persons responsible identified and expected


dates for completion defined;

· recommendations made by the walkthrough team on how to remedy


defects and dispose of unresolved issues (e.g. further walkthrough
meetings).
This output can take the form of the minutes of the meeting, or
be a self-standing report.

2.3.3 Audits

Audits are independent reviews that assess compliance with


software requirements, specifications, baselines, standards, procedures,
instructions, codes and contractual and licensing requirements. To ensure
their objectivity, audits should be carried out by people independent of
the development team. The audited organisation should make resources
(e.g. development team members, office space) available to support the
audit.

A 'physical audit' checks that all items identified as part of the


configuration are present in the product baseline. A 'functional audit'
checks that unit, integration and system tests have been carried out and
records their success or failure. Other types of audits may examine any
part of the software development process, and take their name from the
part of the process being examined, e.g. a 'code audit' checks code
against coding standards.

Audits may be routine or non-routine. Examples of routine audits


are the functional and physical audits that must be performed before the
release of the software (SVV03). Non-routine audits may be initiated by
the organisation receiving the software, or management and quality
assurance personnel in the organisation producing the software.
The following sections describe the audit process, and are based upon the
ANSI/IEEE Std 1028-1988, 'IEEE Standard for Software Reviews and Audits'
[Ref 10].

2.3.3.1 Objectives

The objective of an audit is to verify that software products and


processes comply with standards, guidelines, specifications and
procedures.

2.3.3.2 Organisation

The audit process is carried out by an audit team, which is made up

of:

· a leader;

· members.

The leader is responsible for administrative tasks associated with


the audit. The specific responsibilities of the leader include:

· nominating the audit team;


· organising the audit and informing all participants of the schedule of
activities;

· issuing the audit report.

Members interview the development team, examine review


items, report errors and recommend solutions.

2.3.3.3 Input

The following items should be input to an audit:

· terms of reference defining the purpose and scope of the audit;

· criteria for deciding the correctness of products and processes such


as contracts, plans, specifications, procedures, guidelines and
standards;

· software products;

· software process records;

· management plans defining the organisation of the project being


audited.
2.3.3.4 Activities

The team formed to carry out the audit should produce a plan
that defines the:

· products or processes to be examined;

· schedule of audit activities;

· sampling criteria, if a statistical approach is being used;

· criteria for judging correctness (e.g. the SCM procedures might be


audited against the SCMP);

· checklists defining aspects to be audited;

· audit staffing plan;

· date, time and place of the audit kick-off meeting.

The audit team should prepare for the audit by familiarising


themselves with the organisation being audited, its products and its
processes. All the team must understand the audit criteria and know how
to apply them. Training may be necessary.

The audit team then examines the software products and


processes, interviewing project team members as necessary. This is the
primary activity in any audit. Project team members should co-operate
fully with the auditors. Auditors should fully investigate all problems,
document them, and make recommendations about how to rectify them.
If the system is very large, the audit team may have to employ a
sampling approach.

When their investigations are complete, the audit team should


issue a draft report for comment by the audited organisation, so that any
misunderstandings can be eliminated. After receiving the audited
organisation's comments, the audit team should produce a final report. A
follow-up audit may be required to check that actions are implemented.

2.3.3.5 Output

The output from an audit is an audit report that:

· identifies the organisation being audited, the audit team, and the date
and place of the audit;

· defines the products and processes being audited;

· defines the scope of the audit, particularly the audit criteria for
products and processes being audited;

· states conclusions;

· makes recommendations;
· lists actions.

2.4 TRACING
Tracing is 'the act of establishing a relationship between two or more products of
the development process; for example, to establish the relationship between a
given requirement and the design element that implements that requirement' [Ref
6]. There are two kinds of traceability:

· forward traceability;

· backward traceability.

Forward traceability requires that each input to a phase must be traceable to an
output of that phase (SVV01). Forward traceability shows completeness, and is
normally done by constructing traceability matrices. These are normally
implemented by tabulating the correspondence between input and output (see the
example in ESA PSS-05-03, Guide to Software Requirements Definition [Ref 2]).
Missing entries in the matrix display incompleteness quite vividly. Forward
traceability can also show duplication: inputs that trace to more than one output
may be a sign of duplication.
Backward traceability requires that each output of a phase must be traceable to an
input to that phase (SVV02). Outputs that cannot be traced to inputs are
superfluous, unless it is acknowledged that the inputs themselves were incomplete.
Backward tracing is normally done by including with each item a statement of why
it exists (e.g. source of a software requirement, requirements for a software
component).
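
As an informal illustration of these two checks, the short Python sketch below builds a toy traceability matrix and flags missing, duplicated and superfluous entries; the requirement and design-element identifiers are invented for the example.

# Minimal sketch of a forward/backward traceability check.
# All identifiers below are hypothetical.
requirements = {"UR-01", "UR-02", "UR-03"}           # inputs to the phase
design_elements = {                                  # outputs of the phase
    "DE-10": {"UR-01"},      # each output records the inputs it traces back to
    "DE-11": {"UR-01"},
    "DE-12": set(),          # traces to no input -> possibly superfluous
}

# Forward traceability: every input must be traceable to an output.
covered = set().union(*design_elements.values())
print("Requirements with no design element (incomplete):", requirements - covered)

# Inputs tracing to more than one output may be a sign of duplication.
duplicated = {r for r in requirements
              if sum(r in srcs for srcs in design_elements.values()) > 1}
print("Requirements implemented by several elements (possible duplication):", duplicated)

# Backward traceability: every output must be traceable to an input.
print("Design elements with no requirement (possibly superfluous):",
      [d for d, srcs in design_elements.items() if not srcs])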
During the software life cycle it is necessary to trace:

· user requirements to software requirements and vice-versa;

· software requirements to component descriptions and vice versa;

· integration tests to architectural units and vice-versa;

· unit tests to the modules of the detailed design;

· system tests to software requirements and vice-versa;

· acceptance tests to user requirements and vice-versa.

To support traceability, all components and requirements are


identified. The SVVP should define how tracing is to be done.
References to components and requirements should include identifiers.
The SCMP defines the identification conventions for documents and
software components. The SVVP should define additional identification
conventions to be used within documents (e.g. requirements) and
software components.

2.5 FORMAL PROOF


Formal proof attempts to demonstrate logically that software is correct. Whereas a
test empirically demonstrates that specific inputs result in specific outputs, formal
proofs logically demonstrate that all inputs meeting defined preconditions will
result in defined postconditions being met.
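
The contrast can be sketched informally in Python: a test checks one concrete input, whereas a proof obligation has to be discharged for every input satisfying the precondition. The integer-division routine below is a hypothetical example used only to show the precondition/postcondition form.

# A test demonstrates one input/output pair; a formal proof must establish that
# for ALL inputs satisfying the precondition the postcondition holds.
def integer_divide(a: int, b: int):
    """Precondition: b > 0.  Postcondition: a == q * b + r and 0 <= r < b."""
    q, r = a // b, a % b
    return q, r

# Empirical test: only this specific case is demonstrated.
assert integer_divide(7, 2) == (3, 1)

# Run-time check of the postcondition over sampled inputs; a formal proof would
# instead establish it logically for every a and every b > 0.
for a in range(-5, 6):
    for b in range(1, 5):
        q, r = integer_divide(a, b)
        assert a == q * b + r and 0 <= r < b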

Where practical, formal proof of the correctness of software may be attempted.


Formal proof techniques are often difficult to justify because of the additional
effort required above the necessary verification techniques of reviewing, tracing
and testing.

The difficulty of expressing software requirements and designs in the


mathematical form necessary for formal proof has prevented the wide application
of the technique. Some areas where formal methods have been successful are for
the specification and verification of:

· protocols;

· secure systems.

Good protocols and very secure systems depend upon having precise, logical
specifications with no loopholes.

Ideally, if formal techniques can prove that software is correct,


separate verification (e.g. testing) should not be necessary. However,
human errors in proofs are still possible, and ways should be sought to
avoid them, for example by ensuring that all proofs are checked
independently.
Design Verification vs Design Validation

Probably the most misunderstood concept in the design requirements of ISO9001,


if not the entire standard, is the difference between Design Verification and Design
Validation. These two steps are distinctly different, and important in a good design
process. One step is used to make sure that the design has addressed every
requirement, while the other is used to prove that the design can meet the
requirements set out for it.

Design and Development Verification

Verification is strictly a paper exercise. It starts with taking all the design inputs:
specifications, government and industry regulations, knowledge taken from
previous designs, and any other information necessary for proper function. With
these requirements in hand you compare to your design outputs: drawings,
assembly instructions, test instructions, and electronic design files.

In the comparison you are ensuring that each requirement in the inputs is
accounted for in the outputs. Is each required test called out in the test instructions,
including the correct pass/fail criteria for each test? Are all product acceptance
criteria correct? Are all physical characteristics identified in the build instructions?

The output of this verification review is often recorded in a Statement of


Compliance document. This document will list every requirement for the design,
identify if the design is compliant or not, and list where this compliance is proven
in the documentation.
A sample line may look like this:

Requirement                      C/NC    Verification
Max Dimensions 1″ x 2″ x 4″      C       Drawing 123456-7

The obvious importance in this step is to make sure that the design has not missed
addressing any requirements. If requirements are not compliant, meaning the
design does not meet a requirement, now is the time to know this and negotiate
with the customer if this requirement is necessary or can be relaxed.

Design and Development Validation

Validation is the step where you actually build a version of the product, and would
be done against the requirements as modified after verification. This does not
necessarily mean the first production unit, but it can. It can also be an engineering
model, which some companies use to prove the first run of a complicated new
design, or it can be a portion of the design which is different from a previous
model, when the design is a modification of an already-proven design. Once you
decide what representative product you will build to prove the design, you fully
test it to make sure that the product, as designed, will meet all the necessary
requirements defined in the Design Inputs.

This will often require more testing than will be used on production models. To
ensure that all requirements are met, a full set of measurement and tests is done on
the validation unit. In some industries this is referred to as a First Article
Inspection (FAI), a first off, or Production Part Approval Process (PPAP).
Depending on customer requirements this can be recorded as a standalone
document, or an addition to the Statement of Compliance created in the verification
step.

A sample line may look like this:

Requirement                      C/NC    Verification        Value                      Validation
Max Dimensions 1″ x 2″ x 4″      C       Drawing 123456-7    0.95″ x 1.98″ x 3.85″      CMM Report Order#98765

After the full set of requirements has been validated on one unit, most products can
move to a reduced level of inspection and testing, depending on factors such as
requirement criticality or manufacturing process capability. A good product
validation can help decide which requirements need to be checked on every
product, and which do not.

Verification vs Validation – Satisfying Customer Needs

Each of these steps is important in the design process because they serve two
distinct functions. Verification is a theoretical exercise designed to make sure that
no requirements are missed in the design, whereas validation is a practical exercise
that ensures that the product, as built, will function to meet the requirements.
Together, they ensure that the product designed will satisfy the customer needs,
and the needs of the customer are one of the key focuses for ISO 9001 and
improving Customer Satisfaction.
Software Testing - Validation Testing

The process of evaluating software during the development process or at the end of
the development process to determine whether it satisfies specified business
requirements.

Validation Testing ensures that the product actually meets the client's needs. It can
also be defined as demonstrating that the product fulfils its intended use when
deployed in an appropriate environment.

It answers the question: Are we building the right product?

Validation Testing - Workflow:

Validation testing can best be demonstrated using the V-Model. The software/product
under test is evaluated during this type of testing.

Activities:

• Unit Testing

• Integration Testing

• System Testing

• User Acceptance Testing

What is Verification Testing?

Verification is the process of evaluating work-products of a development phase to
determine whether they meet the specified requirements.

Verification ensures that the product is built according to the requirements and
design specifications. It also answers the question: Are we building the product right?

Verification Testing - Workflow:

Verification testing can best be demonstrated using the V-Model. The artefacts
such as test plans, requirement specification, design, code and test cases are
evaluated.

Activities:

• Reviews

• Walkthroughs

• Inspection

SOFTWARE TESTING

UNIT 5 (BCA 5TH SEM)

TESTING TOOLS

Introduction

Development of complex software products includes several activities like
understanding the customer requirements, design, coding and integration. These
activities need to be coordinated and reviewed suitably to meet the desired
requirements. Because of the complex nature of the process involved, and the
fallibility of designers and developers, any software development must be
accompanied by Quality Assurance activities. Testing of the developed code or
software is one such activity. It is not unusual that 40% of the effort in developing
software is spent on testing.

Software Testing is a process of verifying and validating software application to


ensure that it meets the business, technical and functional requirements. A testing
process requires accurate planning and preparation before doing the actual testing.
According to Hetzel, “Testing is the process of establishing confidence that a
program does what it is supposed to do”, and Glenford Myers defines software
testing in his book ‘The Art of Software Testing’ as “Testing is the process of
executing a program with the intent of finding errors”.
Nowadays, software testing has become large and complex. The software deliverable
includes programs, documents, data etc., so looking at software testing as only
testing of the code has become irrelevant; it now has a broader scope which
includes all the components. Testing has several characteristics, such as:

• Software testing starts with testing of requirements

• Software testing includes both static testing and dynamic testing

• Software testing aims at preventing the occurrence of failures

• It aims to fix defects or faults at the early stages of the development life cycle and reduce the cost of fixing bugs

• It creates reusable test ware

Software testing, depending on the testing method employed, can be implemented
at any time in the development process; however, most test effort is employed
after the requirements have been defined and the coding process has been completed.

Scope of Software Testing

A primary purpose for testing is to detect software failures so that defects may be
uncovered and corrected. This is a non-trivial pursuit. Testing cannot establish that
a product functions properly under all conditions but can only establish that it does
not function properly under specific conditions. The scope of software testing often
includes examination of code as well as execution of that code in various
environments and conditions, and also examining whether the code does what it is
supposed to do and what it needs to do. In the current culture of
software development, a testing organization may be separate from the
development team. There are various roles for testing team members. Information
derived from software testing may be used to correct the process by which
software is developed.

Defects and Failures

Not all software defects are caused by coding errors. One common source of
expensive defects is requirement gaps, e.g., unrecognized requirements that result
in errors of omission by the program designer. A common source of requirement
gaps is non-functional requirements such as testability, scalability, maintainability,
usability, performance, and security.

Software faults occur through the following processes. A programmer makes an


error (mistake), which results in a defect (fault, bug) in the software source code. If
this defect is executed, in certain situations the system will produce wrong results,
causing a failure. Not all defects will necessarily result in failures. For example,
defects in dead code will never result in failures. A defect can turn into a failure
when the environment is changed. Examples of these changes in environment
include the software being run on a new hardware platform, alterations in source
data or interacting with different software. A single defect may result in a wide
range of failure symptoms.

Compatibility
A frequent cause of software failure is incompatibility with another application, a
new operating system, or, increasingly, a new web browser version. In the case of lack of
backward compatibility, this can occur because the programmers have only
considered coding their programs for, or testing the software upon, “the latest
version of” this-or-that operating system. The unintended consequence of this fact
is that: their latest work might not be fully compatible with earlier mixtures of
software/ hardware, or it might not be fully compatible with another important
operating system. In any case, these differences, whatever they might be, may have
resulted in software failures, as witnessed by some significant population of
computer users.

Principles of Software testing

Software testing is a challenging task which requires creativity. Software testing
can be made effective and efficient by applying principles that have been established
over the years. Some of them are:

• Testing must be done by a person or team external to the development team

• Assign responsibility for creating test cases to personnel who are highly creative and
have expertise in implementing those test cases

• The objective of testing must be to identify as many errors as possible

• Test for all invalid and unexpected input conditions as well as valid input
conditions
• Identify and include expected results

• Don’t modify the software during testing

• Prepare test reports along with the test case

• Choose an appropriate testing method

The major purpose of software testing is to increase the software development
team’s confidence that the software will function as per the requirements of the
customer. It is the process of dynamic verification of software’s functionality on
selected test cases from an infinite execution domain. Since the objective of
testing is to uncover the maximum number of errors, a successful test is one which
uncovers errors.

A good software tester’s aim should be to design test cases that uncover the largest
number of errors with a minimum amount of effort and time.

Software Testing Methods

Software testing methods are traditionally divided into black box testing and white
box testing. These two approaches are used to describe the point of view that a test
engineer takes when designing test cases.

Black Box Testing


Black box testing treats the software as a “black box”, without any knowledge of
its internal implementation. Black box testing methods include: equivalence
partitioning, boundary value analysis, all-pairs testing, fuzz testing, model-based
testing, traceability matrix, exploratory testing and specification-based testing.
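
As a small illustration of two of these methods, the sketch below applies equivalence partitioning and boundary value analysis to a hypothetical grade(score) function; the function, its pass mark of 50 and its 0-100 range are assumptions made only for the example.

# Hypothetical specification: grade(score) returns "pass" for 50..100,
# "fail" for 0..49, and raises ValueError outside 0..100.
def grade(score: int) -> str:
    if not 0 <= score <= 100:
        raise ValueError("score out of range")
    return "pass" if score >= 50 else "fail"

# Equivalence partitioning: one representative value per partition.
assert grade(25) == "fail"      # valid partition 0..49
assert grade(75) == "pass"      # valid partition 50..100

# Boundary value analysis: values at and around each boundary.
for score, expected in [(0, "fail"), (49, "fail"), (50, "pass"), (100, "pass")]:
    assert grade(score) == expected

# Invalid partitions: values just outside the valid range must be rejected.
for score in (-1, 101):
    try:
        grade(score)
    except ValueError:
        pass
    else:
        raise AssertionError(f"expected ValueError for {score}")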

Specification- Based Testing

Specification-based testing aims to test the functionality of software according to


the applicable requirements. Thus, the tester inputs data into, and only sees the
outputs from, the test object. This level of testing usually requires thorough test
cases to be provided to the tester, who can then simply verify that for a given input,
the output value (or behaviour) either “is” or “is not” the same as the expected value
specified in the test case. Specification-based testing is necessary, but it is
insufficient to guard against certain risks.

Advantages and disadvantages

The black box tester has no “bonds” with the code, and a tester’s perception is very
simple: a code must have bugs. Using the principle, “Ask and you shall receive,”
black box testers find bugs where programmers don’t. But, on the other hand,
black box testing has been said to be “like a walk in a dark labyrinth without a
flashlight, “because the tester doesn’t know how the software being tested was
actually constructed. That’s why there are situations when (1) a black box tester
writes many test cases to check something that can be tested by only one test case,
and/or (2) some parts of the back-end are not tested at all.
Therefore, black box testing has the advantage of “an unaffiliated opinion,” on the
one hand, and the disadvantage of “blind exploring,” on the other.

White Box Testing

White box testing, by contrast to black box testing, is when the tester has access to
the internal data structures and algorithms including the code that implement these.

Types of white box testing

The following types of white box testing exist:

• API testing (application programming interface) - testing of the application using


public and private APIs.

• Code coverage-creating tests to satisfy some criteria of code coverage. For


example, the test designer can create tests to cause all statements in the program to
be executed at least once.

• Fault injection methods.

• Mutation testing methods.

• Static testing - White box testing includes all static testing.

Code Completeness Evaluation


White box testing methods can also be used to evaluate the completeness of a test
that was created with black box testing methods. This allows the software team to
examine parts of a system that are rarely tested and ensures that the most important
function points have been tested.

Two common forms of code coverage are:

• Function coverage, which reports on the functions executed;

• Statement coverage, which reports on the number of lines executed to complete the test.

Both return a coverage metric, measured as a percentage.
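
A rough feel for statement coverage can be obtained even without a dedicated tool; the sketch below uses Python's standard trace module to record which lines of an invented classify function were executed by a test, which is the raw data behind a statement-coverage percentage.

# Minimal statement-coverage sketch using the standard 'trace' module.
import trace

def classify(n):
    if n < 0:
        return "negative"       # never reached by the single test below
    return "non-negative"

tracer = trace.Trace(count=True, trace=False)
tracer.runfunc(classify, 5)                 # run the "test"

executed = {lineno for (filename, lineno) in tracer.results().counts
            if filename == __file__}
print("Executed line numbers:", sorted(executed))
# Lines of classify() that are missing from this set are the untested
# statements that a coverage report would flag.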

Grey Box Testing

Grey box testing involves having access to internal data structures and algorithms
for purposes of designing the test cases, but testing at the user, or black-box level.
Manipulating input data and formatting output do not qualify as grey box, because
the input and output are clearly outside of the “black-box” that we are calling the
system under test. This distinction is particularly important when conducting
integration testing between two modules of code written by two different
developers, where only the interfaces are exposed for test. However, modifying a
data repository does qualify as grey box, as the user would not normally be able to
change the data outside of the system under test. Grey box testing may also include
reverse engineering to determine, for instance, boundary values or error messages.

Acceptance Testing
Acceptance testing can mean one of two things:

1. A smoke test is used as an acceptance test prior to introducing a build to the


main testing process.

2. Acceptance testing performed by the customer is known as user acceptance


testing (UAT).

Regression Testing

Regression testing is any type of software testing that seeks to uncover software
regressions. Such a regression occurs whenever software functionality that was
previously working correctly stops working as intended. Typically, regressions
occur as an unintended consequence of program changes. Common methods of
regression testing include re-running previously run tests and checking whether
previously fixed faults have re-emerged.

Non Functional Software Testing

Special methods exist to test non-functional aspects of software.

• Performance testing checks to see if the software can handle large quantities of
data or users. This is generally referred to as software scalability. This activity of
Non Functional Software Testing is often times referred to as Load Testing.
• Stability testing checks to see if the software can continuously function well in or
above an acceptable period. This activity of Non Functional Software Testing is
often referred to as endurance testing.

• Usability testing is needed to check if the user interface is easy to use and
understand.

• Security testing is essential for software which processes confidential data and to
prevent system intrusion by hackers.

• Internationalization and localization is needed to test these aspects of software,


for which a pseudo localization method can be used.

In contrast to functional testing, which establishes the correct operation of the


software (correct in that it matches the expected behavior defined in the design
requirements), non-functional testing verifies that the software functions properly
even when it receives invalid or unexpected inputs. Software fault injection, in the
form of fuzzing, is an example of non-functional testing. Non-functional testing,
especially for software, is designed to establish whether the device under test can
tolerate invalid or unexpected inputs, thereby establishing the robustness of input
validation routines as well as error-handling routines. Various commercial non-
functional testing tools are linked from the Software fault injection page; there are
also numerous open-source and free software tools available that perform non-
functional testing.
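
A very small fault-injection example in this spirit is sketched below: random, mostly invalid inputs are fed to a hypothetical parse_age function, and the test only accepts the documented failure mode. The function and its 0-130 range are invented for the illustration; a real fuzzing tool generates far more varied inputs.

# Minimal fuzz-style robustness sketch: invalid inputs must be rejected only
# in the controlled, documented way (here, ValueError).
import random
import string

def parse_age(text: str) -> int:
    """Hypothetical unit under test: expects a decimal age between 0 and 130."""
    value = int(text)                    # raises ValueError for junk input
    if not 0 <= value <= 130:
        raise ValueError("age out of range")
    return value

random.seed(0)
for _ in range(1000):
    junk = "".join(random.choices(string.printable, k=random.randint(0, 8)))
    try:
        parse_age(junk)
    except ValueError:
        pass                             # rejected cleanly: acceptable behaviour
    # any other exception type would escape the loop and signal a robustness defect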
Testing process

A common practice of software testing is performed by an independent group of


testers after the functionality is developed but before it is shipped to the customer. This
practice often results in the testing phase being used as project buffer to
compensate for project delays, thereby compromising the time devoted to testing.
Another practice is to start software testing at the same moment the project starts
and it is a continuous process until the project finishes.

In counterpoint, some emerging software disciplines such as extreme programming


and the agile software development movement, adhere to a “test- driven software
development “model. In this process, unit tests are written first, by the software
engineers (often with pair programming in the extreme programming
methodology). Of course these tests fail initially: as they are expected to. Then as
code is written it passes incrementally larger portions of the test suites. The test
suites are continuously updated as new failure conditions and corner cases are
discovered, and they are integrated with any regression tests that are developed.
Unit tests are maintained along with the rest of the software source code and
generally integrated into the build process( with inherently interactive tests being
relegated to a partially manual build acceptance process).

Testing can be done on the following levels:

• Unit testing tests the minimal software component, or module. Each unit (basic
component) of the software is tested to verify that the detailed design for the unit
has been correctly implemented. In an object-oriented environment, this is usually
at the class level, and the minimal unit tests include the constructors and
destructors.

• Integration testing exposes defects in the interfaces and interaction between


integrated components (modules). Progressively larger groups of tested software
components corresponding to elements of the architectural design are integrated
and tested until the software works as a system.

• System testing tests a completely integrated system to verify that it meets its
requirements.

• System integration testing verifies that a system is integrated to any external or


third party systems defined in the system requirements.

Before shipping the final version of software, alpha and beta testing are often done
additionally:

• Alpha testing is simulated or actual operational testing by potential users/


customers or an independent test team at the developers’ site. Alpha testing is often
employed for off-the-shelf software as a form of internal acceptance testing, before
the software goes to beta testing.

• Beta testing comes after alpha testing. Versions of the software, known as beta
versions, are released to a limited audience outside of the programming team. The
software is released to groups of people so that further testing can ensure the
product has few faults or bugs. Sometimes, beta versions are made available to the
open public to increase the feedback field to a maximal number of future users.
Finally, acceptance testing can be conducted by the end-user, customer, or client to
validate whether or not to accept the product. Acceptance testing may be
performed as part of the hand- off process between any two phases of
development.

After modifying software, either for a change in functionality or to fix defects, a


regression test re-runs previously passing tests on the modified software to ensure
that the modifications haven’t unintentionally caused a regression of previous
functionality. Regression testing can be performed at any or all of the above test
levels. These regression tests are often automated.

More specific forms of regression testing are known as sanity testing, when
quickly checking for bizarre behavior, and smoke testing when testing for basic
functionality.

Benchmarks may be employed during regression testing to ensure that the


performance of the newly modified software will be at least as acceptable as the
earlier version or, in the case of code optimization, that some real improvement has
been accomplished.

Measuring Software Testing

Usually, quality is constrained to such topics as correctness, completeness,


security, but can also include more technical requirements as described under the
ISO standard ISO 9126, such as capability, reliability, efficiency, portability,
maintainability, compatibility, and usability.

There are a number of common software measures, often called “metrics”, which are
used to measure the state of the software or the adequacy of the testing.

Testing Artifacts

Software testing process can produce several artifacts.

Test case

A test case in software engineering normally consists of a unique identifier,
requirement references from a design specification, preconditions, events, a series
of steps (also known as actions) to follow, input, output, expected result, and
actual result. Clinically defined, a test case is an input and an expected result. This
can be as pragmatic as “for condition x your derived result is y”, whereas other test
cases describe in more detail the input scenario and what results might be
expected. It can occasionally be a series of steps (but often steps are contained in a
separate test procedure that can be exercised against multiple test cases, as a matter
of economy) but with one expected outcome. The optional fields are a test case ID,
test step or order of execution number, related requirement(s), depth, test category,
author, and check boxes for whether the test is automatable and has been
automated. Larger test cases may also contain prerequisite states or steps, and
descriptions. A test case should also contain a place for actual result. These steps
can be stored in a word processor document, spreadsheet, database, or other
common repository. In a database system, you may also be able to see past test
results and who generated the results and the system configuration used to generate
those results. These past results would usually be stored in a separate table.
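
The fields listed above can be held in a simple record structure; the sketch below uses a Python dataclass, and the field names and sample values are only one possible layout, invented for the illustration.

# Minimal sketch of a test case record holding the fields described above.
from dataclasses import dataclass
from typing import List

@dataclass
class TestCase:
    identifier: str                  # unique identifier
    requirement_refs: List[str]      # references into the design specification
    preconditions: str
    steps: List[str]                 # actions to follow
    input_data: str
    expected_result: str
    actual_result: str = ""          # filled in when the test is executed

tc = TestCase(
    identifier="TC-001",
    requirement_refs=["SR-12"],
    preconditions="user is logged in",
    steps=["open the report page", "export as PDF"],
    input_data="report id 42",
    expected_result="a PDF file is produced",
)
tc.actual_result = "a PDF file is produced"
print("Passed:", tc.actual_result == tc.expected_result)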

Test script

The test script is the combination of a test case, test procedure, and data. Initially
the term was derived from the product of work created by automated regression
test tools. Today, test scripts can be manual, automated, or a combination of both.
Test data

The most common tests, whether performed manually or in automation, are retesting
and regression testing. In most cases, multiple sets of values or data are used to test the same
functionality of a particular feature. All the test values and changeable
environmental components are collected in separate files and stored as test data. It
is also useful to provide this data to the client along with the product or project.

Test suite

The most common term for a collection of test cases is a test suite. The test suite
often also contains more detailed instructions or goals for each collection of test
cases. It definitely contains a section where the tester identifies the system
configuration used during testing. A group of test cases may also contain
prerequisite states or steps, and descriptions of the following tests.

Test plan

A test specification is called a test plan. The developers are well aware of what test
plans will be executed, and this information is made available to them. This makes
the developers more cautious when developing their code. This ensures that the
developer’s code is not passed through any surprise test case or test plans.

Test harness

The software, tools, samples of data input and output, and configurations are all
referred to collectively as a test harness.

Traceability matrix

A traceability matrix is a table that correlates requirements or design documents to


test documents. It is used to change tests when the source documents are changed,
or to verify that the test results are correct.

Testing tools

With the trend of automating the task, there are many software tools available,
which may be used to help the testing process. Such tools help to achieve accurate
results in reduced time and effort. The availability and capability of testing tools
often enhance the level of testability of software.

Program testing and fault detection can be aided significantly by testing tools and
debuggers. Testing/debug tools include features such as:
Program monitors, permitting full or partial monitoring of program code, including:

- Instruction Set Simulator, permitting step-by-step execution and conditional breakpoint at source level or in machine code

- Program animation, permitting step-by-step execution and conditional breakpoint at source level or in machine code

- Code coverage reports

• Formatted dump or Symbolic debugging, tools allowing inspection of program


variables on error or at chosen points

• Benchmarks, allowing run-time performance comparisons to be made.

• Performance analysis, or profiling tools that can help to highlight hot spots and
resource usage.

Some of these features may be incorporated into an integrated development


environment (IDE).

What Are Software Analysis Tools?

Just as there are software tools available to assist in the basic building of software
code, there are tools that monitor how software is behaving as it runs. These
software analysis tools offer visibility into the execution history of an application.
There are four basic types of software analysis tools:
> Code Coverage—Measures the amount of the software that has been
executed
> Instruction Trace—Creates a record of exactly what happens as the code is
executed
> Memory Analysis—Tracks the code's memory usage and identifies possible
errors
> Performance Analysis—Identifies performance bottlenecks and other issues
allowing fine-tuning of the application for higher performance

The primary difference between software analysis tools and traditional debugging
is that software analysis tools do not require you to stop the application to test it.
Debugging involves starting and stopping the software repeatedly to examine the
code that was executed in order to understand the control flow inside the
application.
The debugging method is problematic in an embedded environment, where
systems cannot always be stopped, or where stopping the system inhibits or skews
the analysis. Consider the powertrain software that controls an automobile's
transmission system. With a real-time system such as this one, it is important to
use tools that can gather information and monitor the control flows as the
application runs, so the developer can be assured that performance and other
operational design specifications are met. This technique helps ensure that the
system will not fail once deployed.
Traditional debugging also comes up short in environments that have multiple
processors within the same system. Increasingly common in today's systems,
multiple threads have multiple tasks running at any given time. It is important to
monitor what is happening in each thread—and how the threads are interacting—
without disturbing the other threads by stopping the application. The ideal software
analysis tool attaches to a running system with as little intrusion as possible.
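
As a small illustration of the memory-analysis idea in an interpreted setting, Python's standard tracemalloc module can compare allocation snapshots while the program keeps running; the "leaky" cache below is invented for the example, and native-code analysis tools work quite differently.

# Minimal memory-analysis sketch using the standard tracemalloc module:
# snapshots are compared while the program keeps running, without stopping it.
import tracemalloc

cache = []          # an invented "leak": entries are appended and never released

def handle_request(i):
    cache.append("payload-" + str(i) * 100)

tracemalloc.start()
before = tracemalloc.take_snapshot()

for i in range(10_000):
    handle_request(i)

after = tracemalloc.take_snapshot()
for stat in after.compare_to(before, "lineno")[:3]:
    print(stat)      # the allocation site inside handle_request shows the growth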

The Benefits of Using Software Analysis Tools


Accelerate the Development Process
The primary benefits of using software analysis tools are a deeper understanding of
how an application is really performing and the identification of errors earlier in
the development process. It is crucial to fix errors before they can cause problems
in downstream development. These tools find problems that cannot be detected
with stop mode debug operations.
To truly understand the interactions between multiple threads performing
simultaneous tasks, one must be able to examine how the application interacts with
the RTOSes being used. Monitoring the real-time data flows also aids in the
detection of memory issues, making troubleshooting easier. The end result is an
accelerated development process, as potential problems are found and dealt with in
a timelier manner. In particular, Memory
Analysis and Instruction Trace are the two tools that have the greatest direct impact
on the speed of development.
Tune for Maximum Performance
Software analysis tools can help verify the performance accuracy of an application.
By eliminating unused code and tightening processing loops, the code can be tuned
for maximum performance, ensuring that the whole performs better than the sum
of the parts. Performance analysis can also be used to ensure that real-time
specifications are met. Instrumentation has a distinct advantage here, because it
identifies every execution of a particular function. Conversely, sampling tools may
miss the single occurrence that happens to fall outside the
design tolerances.
Improve Software Quality
By eliminating potential errors and memory leaks, larger problems that may arise
in real-world situations can be prevented. Software analysis tools can track down
hard-to-find problems ranging from memory leaks to strange interactions that
develop during execution. The ideal Memory Analysis tool can reset its data during
an execution without affecting the software, making it easier to identify where in
the source code memory leaks are occurring.
A Code Coverage tool can provide metrics that are useful for improving the QA
process, and thereby the quality of current and future releases. Coverage analysis is
a good way to measure the thoroughness of QA efforts. While testing is a must,
verifying the quality of the test itself can increase confidence in the application. It
is important to verify the performance of any test harnesses used, particularly for
software used in mission-critical systems.

Static analysis tools are generally used by developers as part of the development
and component testing process. The key aspect is that the code (or other artefact) is
not executed or run but the tool itself is executed, and the source code we are
interested in is the input data to the tool.

These tools are mostly used by developers.

Static analysis tools are an extension of compiler technology – in fact some


compilers do offer static analysis features. It is worth checking what is available
from existing compilers or development environments before looking at
purchasing a more sophisticated static analysis tool.
Other than software code, static analysis can also be carried out on other artefacts,
for example static analysis of requirements or static analysis of websites (for
instance, to assess proper use of accessibility tags or adherence to HTML standards).

Static analysis tools for code can help the developers to understand the structure of
the code, and can also be used to enforce coding standards.

Features or characteristics of static analysis tools are:

• To calculate metrics such as cyclomatic complexity or nesting levels (which can help to identify where more testing may be needed due to increased risk);

• To enforce coding standards;

• To analyse structures and dependencies;

• To help in code understanding;

• To identify anomalies or defects in the code.
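
As a toy illustration of the metric-calculation idea, the sketch below uses Python's standard ast module to count decision points in a piece of source code without executing it, a crude stand-in for a cyclomatic complexity measurement; real static analysis tools are considerably more thorough, and the ship_order function is invented for the example.

# Toy static analysis sketch: the source code is parsed, never run.
import ast

source = """
def ship_order(order):
    if not order.items:
        return "empty"
    for item in order.items:
        if item.back_ordered:
            return "delayed"
    return "shipped"
"""

tree = ast.parse(source)
decision_nodes = (ast.If, ast.For, ast.While, ast.BoolOp, ast.Try)
decisions = sum(isinstance(node, decision_nodes) for node in ast.walk(tree))
print("Approximate cyclomatic complexity:", decisions + 1)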

What is Test harness/ Unit test framework tools in software testing?

These tools are mostly used by developers. These two types of tool are grouped
together because they are variants of the type of support needed by developers
when testing individual components or units of software. A test harness provides
stubs and drivers, which are small programs that interact with the software under
test (e.g. for testing middleware and embedded software). Some unit test
framework tools provide support for object-oriented software, others for other
development paradigms. Unit test frameworks can be used in agile development to
automate the tests in parallel with development. Both types of tool enable the
developer to test, identify and localize any defects. The stubs and drivers supply
any information needed by the software being tested (e.g. an input given by the
user) and also receive any information sent by the software (e.g. a value to be
displayed on a screen). Stubs may also be referred to as ‘mock objects’.

There are many ‘xUnit’ tools for different programming languages, e.g. JUnit for
Java, NUnit for .Net applications, etc. There are both commercial tools and also
open-source (i.e. free) tools. Unit test framework tools are very similar to test
execution tools, since they provide facilities such as the ability to store test cases
and monitor whether tests pass or fail, for example.

The main difference is that there is no capture/playback facility and they tend to be
used at a lower level, i.e. for component or component integration testing, rather
than for system or acceptance testing.
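
A small sketch of both ideas is shown below using Python's built-in unittest, an xUnit-style framework: a stub stands in for a collaborator so that the unit can be tested in isolation, and the framework records pass/fail results. The checkout function and the FakePaymentGateway stub are invented for the example.

# Minimal xUnit-style sketch using Python's built-in unittest framework.
import unittest

def checkout(total, gateway):
    """Unit under test (invented for this example)."""
    if total <= 0:
        raise ValueError("total must be positive")
    return gateway.charge(total)

class FakePaymentGateway:
    """Stub / 'mock object': supplies canned answers instead of real behaviour."""
    def charge(self, amount):
        return {"status": "ok", "amount": amount}

class CheckoutTests(unittest.TestCase):
    def test_successful_charge(self):
        result = checkout(25.0, FakePaymentGateway())
        self.assertEqual(result["status"], "ok")

    def test_rejects_non_positive_total(self):
        with self.assertRaises(ValueError):
            checkout(0, FakePaymentGateway())

if __name__ == "__main__":
    unittest.main()      # the framework runs the tests and records pass/fail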

Features or characteristics of test harnesses and unit test frameworks are:

• To supply inputs to the software being tested;

• To receive outputs generated by the software being tested;

• To execute a set of tests within the framework or using the test harness;

• To record the pass/fail results of each test (framework tools);

• To store tests (framework tools);

• To provide support for debugging (framework tools);

• To do coverage measurement at code level (framework tools).

What is Test comparators in software testing?

A test comparator helps to automate the comparison between the actual and the
expected result produced by the software.
There are two ways in which actual results of a test can be compared to the
expected results for the test:

i. Dynamic comparison is where the comparison is done dynamically, i.e. while


the test is executing. This type of comparison is good for comparing the wording of
an error message that pops up on a screen with the correct wording for that error
message. Dynamic comparison is useful when an actual result does not match the
expected result in the middle of a test – the tool can be programmed to take some
recovery action at this point or go to a different set of tests.

ii. Post-execution comparison is the other way, where the comparison is


performed after the test has finished executing and the software under test is no
longer running. Operating systems normally have file comparison tools available
which can be used for post-execution comparison and often a comparison tool will
be developed in-house for comparing a particular type of file or test result. Post-
execution comparison is best for comparing a large volume of data, for example
comparing the contents of an entire file with the expected contents of that file, or
comparing a large set of records from a database with the expected content of those
records. For example, comparing the result of a batch run (e.g. overnight
processing of the day’s online transactions) is probably impossible to do without
tool support.

Whether a comparison is dynamic or post-execution, the test comparator needs to


know what the correct result is. This may be stored in the test case itself or it may
be computed using a test oracle.
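
A bare-bones post-execution comparator might look like the sketch below; the file names are placeholders, and the masking of timestamps illustrates the filtering of volatile fields mentioned among the features that follow.

# Minimal post-execution comparator sketch: the comparison runs after the test
# has finished, masking volatile fields so they never cause spurious mismatches.
import re

def mask_timestamps(line: str) -> str:
    return re.sub(r"\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}", "<TIMESTAMP>", line)

def compare_files(actual_path: str, expected_path: str):
    """Return a list of (line number, actual, expected) mismatches."""
    mismatches = []
    with open(actual_path) as a, open(expected_path) as e:
        for lineno, (actual, expected) in enumerate(zip(a, e), start=1):
            if mask_timestamps(actual) != mask_timestamps(expected):
                mismatches.append((lineno, actual.rstrip(), expected.rstrip()))
    return mismatches

# Example usage (the file names are placeholders):
# differences = compare_files("batch_run.out", "batch_run.expected")
# assert not differences, differences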

Features or characteristics of test comparators are:

• To do dynamic comparison of transient events that occur during test execution;

• To do post-execution comparison of stored data, e.g. in files or databases;

• To mask or filter subsets of actual and expected results.

Debuggers

Introduction

Debugging means locating (and then removing) bugs, i.e., faults, in programs. In
the entire process of program development errors may occur at various stages and
efforts to detect and remove them may also be made at various stages. However,
the word debugging is usually in context of errors that manifest while running the
program during testing or during actual use. The most common steps taken in
debugging are to examine the flow of control during execution of the program,
examine values of variables at different points in the program, examine the values
of parameters passed to functions and values returned by the functions, examine
the function call sequence, etc. In the absence of other mechanisms, one usually
inserts statements in the program at various carefully chosen points, that print
values of significant variables or parameters, or some message that indicates the
flow of control (or function call sequence). When such a modified version of the
program is run, the information output by the extra statements gives clue to the
errors.

Using print statements for debugging a program is often not adequate or


convenient. For example, the programmer may want to change the values of
certain variables (or parameters) after observing the execution of the program till
some point. For a large program it may be difficult to go back to the source
program, make the necessary changes (maybe temporarily) and rerun the program.
Again, if such print statements are placed inside loops, it will produce output every
time the loop is executed though the programmer may be interested in only certain
iterations of the loop. To overcome several such drawbacks of debugging by
inserting extra statements in the program, there is a kind of tool called a debugger
that helps in debugging programs by giving the programmer some control over the
execution of the program and some means of examining and modifying different
program variables during runtime.

Basic Operations Supported by a Debugger

A debugger provides an interactive interface to the programmer to control the


execution of the program and observe the proceedings. The program (executable
file) to be debugged is provided as an input to the debugger. The basic operations
supported by a debugger are -

1. Breakpoints - Setting breakpoints at various positions in the program. The


breakpoints are points in the program at which the programmer wishes to
suspend normal execution of the program and perform other tasks.
2. Examining values of different memory locations - When the execution of a
program is suspended, the contents of specified memory locations can be
examined. This includes local variables (usually on the stack), function
parameters, and global (extern) variables.
3. Examining the contents of the program stack - The contents of the program
stack reveals information related to the function call sequence that is active
at that moment.
4. Depositing values in different memory locations - While the execution of the
debugged program is not underway (yet to start or suspended at a
breakpoint), the programmer can deposit any value in the memory locations
corresponding to the program variables, parameters to subroutines, and
processor registers.
5. Testing assertions - The programmer may specify relations involving
program values, that must hold at certain positions in the program during
execution, e.g., after an assignment of the form a = b - c, b must be larger
than a (provided c is positive).
6. Detecting conditions - Suspend execution of the program whenever any user
defined condition involving the program variables and/or parameters is met.
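
Most of the operations listed above can be seen in any interactive debugger; the short sketch below uses Python's standard pdb module as an illustration, with an invented average function to inspect.

# Minimal sketch of debugger use with Python's standard pdb module.
import pdb

def average(values):
    total = sum(values)
    pdb.set_trace()             # breakpoint: execution suspends here (operation 1)
    return total / len(values)  # suspect line: fails when 'values' is empty

# At the (Pdb) prompt the programmer can, for example, type:
#   p total, values        -> examine values of variables         (operation 2)
#   where                  -> examine the program stack            (operation 3)
#   values = [1, 2, 3]     -> deposit a new value before resuming  (operation 4)
#   continue               -> resume normal execution
average([])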

Use of Source Program Symbols

Most debuggers allow the user to refer to the program information in terms of
symbols of source program, viz., variable names, subroutine names, parameter
names, field names of composite data structures (records), source program line
numbers (for specifying breakpoints), etc. Since an executable program usually does
not contain the mappings from source program symbols to target program
addresses, to be useful to a debugger the compiler must include such
mappings in the executable program as additional information (say debugger
information). Most compilers support some invocation-option for this purpose (eg.
in Unix/Linux, the cc option -g). Format of this information created by a compiler
must be understandable by a debugger.
Principle of Operation
The principle of operation of a debugger can be understood by considering a
simple view - from the specially compiled executable program the debugger reads
the debugger information into its own data structures. The interactive features of
the debugger are in the form of a module (that can be invoked as a function call or
through a software interrupt). The interactive interface is invoked by the debugger
once at the beginning. The user can specify breakpoints through the interface and
then tell the debugger to start execution of the program to be debugged. The
program will continue till the first breakpoint is encountered. At that point the
control is transferred to the interactive interface. The programmer can carry out
various kinds of operations that are supported in that interface.

Setting of breakpoints

To make the user program stop at specified points, the debugger inserts certain
statements at those points that would transfer control to the interactive interface
module. These statements might be in the form of function call instruction or
software interrupt instruction. Usually in programs compiled specifically for
debugging, the compiler inserts NOP instructions after the translation of each
statement of the source program. So the debugger can simply replace a NOP
instruction by the function call or interrupt instruction. In some cases where NOP
instructions are not there,the debugger replaces valid instructions of the program to
insert its own function call (or interrupt) instruction, but takes care to have the
original instructions executed whenever execution proceeds through that point.
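
The following toy C sketch (a conceptual illustration only, assuming a POSIX system where SIGTRAP is available; real debuggers such as gdb patch the machine code of a separate process, for example via ptrace on Linux, rather than working inside the same program) models a breakpoint as a software interrupt that transfers control to an interactive-interface module:

#include <signal.h>
#include <stdio.h>

/* Conceptual model of the breakpoint mechanism: the "interactive
 * interface" is represented by a signal handler, and a breakpoint is
 * represented by raising a software interrupt (SIGTRAP) at the point
 * where normal execution should be suspended. */
static void debugger_interface(int sig)
{
    (void)sig;
    printf("breakpoint hit: control passes to the debugger module\n");
    /* A real debugger would now let the user examine memory locations,
     * the program stack and registers before resuming execution. */
}

int main(void)
{
    signal(SIGTRAP, debugger_interface);  /* install the interface module once, at the start */

    printf("before the breakpoint\n");
    raise(SIGTRAP);                       /* stands in for the trap instruction planted at a breakpoint */
    printf("after the breakpoint: normal execution resumes\n");
    return 0;
}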

QUALITY ASSURANCE AND STANDARDS

Software Quality Control Fundamentals

Software Quality Assurance (SQA)


Though controversial, software testing may be viewed as an important part of the
software quality assurance (SQA) process. In SQA, software process specialists
and auditors take a broader view on software and its development. They examine
and change the software engineering process itself to reduce the amount of faults
that end up in the delivered software: the so-called defect rate.

What constitutes an “acceptable defect rate” depends on the nature of the software. For example, an arcade video game designed to simulate flying an airplane would presumably have a much higher tolerance for defects than mission-critical software such as that used to control the functions of an airliner that really is flying.
Although there are close links with SQA, testing departments often exist independently, and there may be no SQA function in some companies. Software testing is a task intended to detect defects in software by contrasting a computer program’s expected results with its actual results for a given set of inputs. By contrast, QA (Quality Assurance) is the implementation of policies and procedures intended to prevent defects from occurring in the first place. Software Quality Control (SQC) is a set of activities for ensuring quality in software products.

It includes the following activities:

Reviews

Requirement Review

Design Review

Code Review

Deployment Plan Review


Test Plan Review

Test Cases Review

Testing

Unit Testing

Integration Testing

System Testing

Acceptance Testing

Software Quality Control is limited to the Review/Testing phases of the Software Development Life Cycle, and the goal is to ensure that the products meet specifications/requirements.

The process of Software Quality Control (SQC) is governed by Software Quality Assurance (SQA). While SQA is oriented towards prevention, SQC is oriented towards detection. The differences between Software Quality Assurance and Software Quality Control are summarized later in this unit.

McCall’s Quality Factors

Product Operation
Correctness - Does it do what I want?
Reliability - Does it do it accurately all of the time?
Efficiency - Will it run on my hardware as well as it can?
Integrity - Is it secure?
Usability - Can I run it?

Product Revision
Maintainability - Can I fix it?
Testability - Can I test it?
Flexibility - Can I change it?

Product Transition
Portability - Will I be able to use it on another machine?
Reusability - Will I be able to reuse some of the software?
Interoperability - Will I be able to interface to another system?

McCall’s Quality Criteria

Access audit, Access control, Accuracy, Communication commonality, Completeness, Communicativeness, Conciseness, Consistency, Data commonality, Error tolerance, Execution efficiency, Generality, Hardware independence, Instrumentation, Modularity, Operability, Self-documentation, Simplicity, Software system independence, Storage efficiency, Traceability, Training

Software Quality Assurance


Software Quality Assurance (SQA) is a set of activities for ensuring quality in
software engineering processes (that ultimately result in quality in software
products).

It includes the following activities:

Process definition and implementation

Auditing

Training

Processes could be:

Software Development Methodology

Project Management

Configuration Management

Requirements Development/Management

Estimation

Software Design

Testing

Once the processes have been defined and implemented, Quality Assurance has the following responsibilities:

– identify weaknesses in the processes

– correct those weaknesses to continually improve the process

Note: There are many other models/standards for quality management, but the ones discussed in this unit (ISO 9000, CMMI) are the most popular.

Software Quality Assurance encompasses the entire software development life cycle, and the goal is to ensure that the development and/or maintenance processes are continuously improved to produce products that meet specifications/requirements.

The process of Software Quality Control (SQC) is also governed by Software Quality Assurance (SQA).

Software Quality Assurance (SQA)

SQA is a collection of activities during software development that focus on increasing the quality of the software being produced.

SQA includes

– Analysis, design, coding and testing methods and tools

– Formal Technical reviews during software development

– A multi-tiered testing strategy

– Control of software documentation and the changes made to it

– Procedures to ensure compliance with software development standards

– Software measurement and reporting mechanisms

SQA is often conducted by an independent group in the organization

Often this group has the final veto over the release of a software product
SQA Activities

1. Application of Technical Methods

Tools to aid in the production of a high-quality specification (e.g., specification checkers and verifiers), tools to aid in the production of high-quality designs (e.g., design browsers, checkers, cross-referencers, verifiers), and tools to analyze source code for quality.

2. Formal Technical Reviews

Group analysis of a specification or design to discover errors

3. Software Testing

4. Enforcement of standards

Specification and design standards

Implementation standards, e.g., portability

Documentation standards

Testing standards

5. Control of Change

Formal management of changes to the software and documentation. Changes require a formal request to an approving authority, which decides which changes get implemented and when. Programmers are not permitted to make unapproved changes to the software. This provides an opportunity to evaluate the impact and cost of changes before committing resources, and to evaluate the effect of proposed changes on software quality.

6. Measurement

Ongoing assessment of software quality: track quality changes as the system evolves, and warn management if software quality appears to be degrading (a small illustrative example of one such measurement is given below).

7. Record Keeping and Reporting

Collect the output and reports of SQA activities, disseminate reports to software managers, maintain an archive of SQA reports, maintain a log of software development activity (especially testing) to satisfy legal requirements, and maintain institutional memory of the software development effort.
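
As a small hypothetical illustration of the measurement activity (the module names, figures and the 1.0 defects/KLOC threshold below are invented purely for this example), a quality group might track a simple metric such as defect density, i.e. defects found per thousand lines of code (KLOC):

#include <stdio.h>

/* Hypothetical example of one simple quality measurement:
 * defect density = defects found / KLOC (thousand lines of code). */
struct module_record {
    const char *name;
    int defects_found;
    int lines_of_code;
};

int main(void)
{
    struct module_record modules[] = {
        { "billing",   12, 8000 },
        { "reporting",  2, 3500 },
    };
    int n = sizeof(modules) / sizeof(modules[0]);

    for (int i = 0; i < n; i++) {
        double kloc = modules[i].lines_of_code / 1000.0;
        double density = modules[i].defects_found / kloc;
        printf("%-10s defect density: %.2f defects/KLOC\n",
               modules[i].name, density);
        if (density > 1.0)   /* warn management if quality appears to be degrading */
            printf("  -> above the example threshold of 1.0, flag for review\n");
    }
    return 0;
}

Tracking such a figure release after release is one way to follow quality changes as the system evolves and to give management early warning of degradation.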

What is Quality Assurance?

Quality Assurance is defined as the auditing and reporting procedures used to provide the stakeholders with the data needed to make well-informed decisions.

It is the degree to which a system meets specified requirements and customer expectations. It also involves monitoring the processes and products throughout the SDLC.

Quality Assurance Criteria:

Below are the quality assurance criteria against which the software would be evaluated:

Correctness, efficiency, flexibility, integrity, interoperability, maintainability, portability, reliability, reusability, stability, usability

What is Quality Control?


Quality control is a set of methods used by organizations to achieve quality
parameters or quality goals and continually improve the organization's ability to
ensure that a software product will meet quality goals.

Quality Control Process:

The three classes of parameters that control software quality are:

Products

Processes

Resources

The total quality control process consists of:

Plan - The stage where the quality control processes are planned

Do - Use a defined parameter to develop the quality

Check - Verify whether the quality parameters are met

Act - Take corrective action if needed and repeat the work

Quality Control characteristics:

A process adopted to deliver a quality product to the clients at the best cost.

The goal is to learn from other organizations so that quality is better each time.

Errors are avoided by proper planning and execution, with a correct review process.

What is Software Quality Management?

Software Quality Management ensures that the required level of quality is achieved by introducing improvements into the product development process. It aims to develop a culture within the team in which quality is seen as everyone's responsibility.

Software Quality Management should be independent of project management, so that quality is not compromised by cost and schedule pressures. It directly affects process quality and indirectly affects product quality.

Activities of Software Quality Management:

Quality Assurance - QA aims at developing organizational procedures and standards for quality at the organizational level.

Quality Planning - Select applicable procedures and standards for a particular project and modify them as required to develop a quality plan.

Quality Control - Ensure that best practices and standards are followed by the software development team to produce quality products.
Differences between Software Quality Assurance (SQA) and Software Quality Control (SQC):

Definition
SQA: A set of activities for ensuring quality in software engineering processes (that ultimately result in quality in software products). The activities establish and evaluate the processes that produce products.
SQC: A set of activities for ensuring quality in software products. The activities focus on identifying defects in the actual products produced.

Focus
SQA: Process focused
SQC: Product focused

Orientation
SQA: Prevention oriented
SQC: Detection oriented

Breadth
SQA: Organization wide
SQC: Product/project specific

Scope
SQA: Relates to all products that will ever be created by a process
SQC: Relates to a specific product

Activities
SQA: Process definition and implementation, audits, training
SQC: Reviews, testing

The quality management system under which the software system is created is
normally based on one or more of the following models/standards:

ISO 9000

CMMI

ISO 9000 Registration

The effort required to obtain ISO 9000 registration depends on how closely an organization's existing process fits the ISO 9000 model: the closer the fit, the less effort is required.

ISO 9000 registration is granted when an accredited inspection organization certifies that the organization's practices conform to the ISO standard.

Re-registration is required every 3 years, and surveillance audits are performed every 6 months.

ISO registration can cost a lot of time, effort and money to achieve, and it requires continuing effort to stay registered.

ISO 9126 Standard

Another product-oriented attempt to define software quality attributes; it takes a user's view of software quality. ISO 9126 does not address software process issues.

ISO 9000

A set of quality standards developed so that purchasers of goods can have confidence that suppliers of these goods have produced something of acceptable quality.
ISO 9000 certification has become a widely required international standard; any supplier who is not ISO 9000 certified will find it difficult to sell their goods. The ISO standard addresses design, development, production, installation and maintenance issues.

The emphasis in the ISO standard is on documentation of the process and the
managing of the process.

ISO 9001 Components

1. Management responsibility
2. Quality system
3. Contract review
4. Design control
5. Document and data control
6. Purchasing
7. Control of customer-supplied product
8. Product identification and traceability
9. Process control
10. Inspection and testing
11. Control of inspection, measuring and test equipment
12. Inspection and test status
13. Control of non-conforming product
14. Corrective and preventive action
15. Handling, storage, packaging, preservation, delivery
16. Control of quality records
17. Internal quality audits
18. Training
19. Servicing
20. Statistical techniques

A Cynic’s View of ISO 9000 Registration

ISO 9000 certification focuses on how well the processes are documented, not on the quality of the process. Many companies do the minimum required to achieve ISO 9000 certification for business reasons, but forget about it as soon as the ISO 9000 inspectors have signed off.

ISO 9000 forces companies to act in ways which make things worse for their
customers. ISO 9000 is based on the faulty premise that work is best controlled by
specifying and controlling procedures.

The SEI’s Capability Maturity Model

The Capability Maturity Model for Software (CMM) is a five-level model laying out a generic path to process improvement for a software organization:

1. Initial – ad hoc processes

2. Repeatable – basic management processes

3. Defined – management and engineering processes documented, standardized, integrated, and actually used

4. Managed – processes measured, monitored and controlled using measurements

5. Optimizing – continuous process improvement is enabled by quantitative feedback from the process and from piloting innovative ideas and technologies
CMM Levels and Key Process Areas

1. Initial level

No formalized procedures, project plans, cost estimates.

Tools not adequately integrated.

Many problems overlooked/ignored.

Maintenance very difficult

Generally ad-hoc processes

2. Repeatable level

Requirements management

Software Project planning

Software project tracking and oversight

Software subcontract management

Software quality assurance

Software configuration management

3. Defined level

Organization process focus

Organization process definition

Training Program
Integrated software management

Software product engineering

Intergroup coordination

Peer reviews

4. Managed level

Quantitative process management

Software Quality management

5. Optimizing level

Defect prevention

Technology change management

Process change management

People Capability Maturity Model

1. Initial

2. Repeatable

Management takes responsibility for managing its people:

Staffing, Compensation, Training, Performance management, Communication, Work environment

3. Defined

Competency-based workforce practices, Participatory culture, Career development, Competency development, Workforce planning, Knowledge and skills analysis

4. Managed

Effectiveness measured and managed

High-performance teams developed

Organizational performance alignment

Organizational competency management

Team-based practices

Team building

Mentoring

5. Optimizing

Continuous knowledge and skills improvement

Continuous workforce innovation

Coaching

Personal competency development
