Software Testing Chapter-1
Introduction To Software
Testing
Basics of Software Testing – faults, errors and failures
Testing objectives
Principles of testing
TESTING
• Testing is a support function that helps developers look good
by finding their mistakes before anyone else does. — James Bach [83]
DEFINITIONS
• The underlying motivation of program testing is to affirm
software quality with methods that can be economically and
effectively applied to both large-scale and small-scale
systems. — Miller [126]
• Testing is a concurrent lifecycle process of engineering, using
and maintaining testware (i.e. testing artifacts) in order to
measure and improve the quality of the software
being tested. — Craig [117]
1. Immediate Goals:
These objectives are the direct outcomes of testing. These objectives may be set at any
time during the SDLC process.
Bug Discovery:
This is the immediate goal of software testing: to find errors at any stage of software
development.
Most bugs should be discovered in the early stages of testing.
The higher the number of issues detected at an early stage, the higher the software
testing success rate.
• Bug Prevention:
This is the consequent action of bug discovery: it occurs as a result of it.
Everyone on the software development team learns from the behavior and analysis of
the issues detected, ensuring that the same bugs are not repeated in subsequent phases
or future projects.
2. Long-Term Goals:
These objectives have an impact on product quality in the long run after one cycle of
the SDLC is completed.
Quality: This goal enhances the quality of the software product. Because software is
also a product, the user’s priority is its quality. Superior quality is ensured by
thorough testing. Correctness, integrity, efficiency, and reliability are all aspects that
influence quality.
Customer Satisfaction: This goal verifies the customer’s satisfaction with a developed
software product. The primary purpose of software testing, from the user’s
standpoint, is customer satisfaction.
Reliability: It is a matter of confidence that the software will not fail. In short,
reliability means gaining the confidence of the customers by providing them with a
quality product.
Risk Management: Risk is the probability that uncertain events will occur in the
organization, together with the potential loss they could cause. Risk
management must be carried out to reduce product failure and to manage risk
in different situations.
3. Post-Implementation Goals:
These objectives become critical after the product is released.
Reduce Maintenance Cost: Errors found after release are costlier to fix and harder
to identify. Because software does not wear out physically, the maintenance cost
of a software product is unlike that of a physical product: failures caused by
residual faults are the main expense of maintenance.
Because they are difficult to discover, post-release mistakes always cost more to
rectify.
As a result, if testing is done thoroughly and effectively, the risk of failure is
lowered, and maintenance costs are reduced with it.
• Improved Software Testing Process: These post-implementation goals improve the
testing process for future software projects. A project’s testing procedure may
not be completely successful, and there may be room for improvement. The bug
history and post-implementation results can therefore be evaluated to identify
stumbling blocks in the current testing process that can be avoided in future
projects.
• Differences between defect, bug and failure
Generally, when a system or application does not behave as expected, we loosely
call it an error, a fault, and so on. Many newcomers to the software testing
industry are confused about how to use these terms.
Defect:
A bug introduced by a programmer inside the code is called a defect.
A defect is defined as a deviation between the actual and expected result of an
application, or as any deviation or irregularity from the specifications
mentioned in the product’s functional specification document.
Defects are resolved by the developer during the development phase.
• Reasons for Defects:
Any deviation from the customer requirements is called a defect.
Giving wrong input may lead to a defect.
Any error in the code’s logic may lead to a defect.
• Bug:
People are often confused between defect and bug, saying that a bug is just the informal
name for a defect. In fact, bugs are faults in a system or application that impact software
functionality and performance.
Bugs are usually found by testers during unit testing.
• There are different types of bugs, some of them are given below.
Functional Errors
Compilation Errors
Missing commands
Run time Errors
Logical errors
Inappropriate error handling
Any of the errors given above can lead to a bug.
• Failure:
A failure is the inability of the software to perform its required function: it is
what the end user observes when a fault in the code is executed.
• Causes of Failure:
Human errors or mistakes may lead to failure.
Environmental conditions.
The way in which the system is used.
• Principles of Software Testing
1. Testing shows the presence of defects
2. Exhaustive testing is not possible
3. Early testing
4. Defect clustering
5. Pesticide paradox
6. Testing is context-dependent
7. Absence of errors fallacy
• Early Testing:
To find defects in the software, test activities should be started as early as
possible. Defects detected in the early phases of the SDLC are much less expensive to fix.
For better software quality, testing should start at the initial
phase, i.e. at the requirement analysis phase.
• Defect clustering:
In a project, a small number of modules can contain most of the defects.
Applied to software testing, the Pareto Principle states that 80% of software
defects come from 20% of the modules.
• Pesticide paradox:
Repeating the same test cases, again and again, will not find new bugs.
So it is necessary to review the test cases and add or update test cases to find
new bugs.
• Testing is context-dependent:
The testing approach depends on the context of the software developed.
Different types of software need to perform different types of testing.
For example, The testing of the e-commerce site is different from the testing of
the Android application.
• Absence of errors fallacy:
If a built software is 99% bug-free but does not follow the user requirements,
then it is unusable.
It is not enough for software to be 99% bug-free; it must also
fulfill all the customer requirements.
• Differences between Testing and Debugging
| Testing | Debugging |
|---|---|
| Testing is the process of finding bugs and errors. | Debugging is the process of correcting the bugs found during testing. |
| It is the process of identifying failures in the implemented code. | It is the process of fixing those failures in the code. |
| No design knowledge is needed in the testing process. | Debugging cannot be done without proper design knowledge. |
| Testing can be done by insiders as well as outsiders. | Debugging is done only by insiders; outsiders cannot debug. |
| Testing can be manual or automated. | Debugging is always manual; it cannot be automated. |
| It is based on different testing levels, i.e. unit testing, integration testing, system testing, etc. | Debugging is based on the different types of bugs. |
| Testing is initiated after the code is written. | Debugging commences with the execution of a test case. |
• Differences between Verification and Validation

| Verification | Validation |
|---|---|
| It includes checking documents, design, code and programs. | It includes testing and validating the actual product. |
| It does not include execution of the code. | It includes execution of the code. |
| Methods used in verification are reviews, walkthroughs, inspections and desk-checking. | Methods used in validation are black-box testing, white-box testing and non-functional testing. |
| It checks whether the software conforms to the specifications. | It checks whether the software meets the requirements and expectations of the customer. |
| It can find bugs in the early stages of development. | It can only find the bugs that could not be found by the verification process. |
| Its target is the application and software architecture and specification. | Its target is the actual product. |
| Verification is done by the quality assurance team. | Validation is executed on the software code with the help of the testing team. |
| It consists of checking documents/files and is performed by humans. | It consists of executing the program and is performed by a computer. |
• Software Testing Life Cycle (STLC)
It is a sequence of different activities performed during the software testing
process.
• Characteristics of STLC:
STLC is a fundamental part of the Software Development Life Cycle (SDLC), but
it consists of only the testing phases.
STLC starts as soon as the requirements are defined or the software requirement
document is shared by the stakeholders.
• STLC yields a step-by-step process to ensure quality software.
• Software Testing Life Cycle (STLC) Phases:
1. Requirement Analysis:
Requirement Analysis is the first step of the Software Testing Life Cycle (STLC). In this
phase the quality assurance team comes to understand the requirements, i.e. what is to be
tested. If anything is missing or not understandable, the quality assurance team
meets with the stakeholders to gain detailed knowledge of the
requirements.
[Figure: Software Testing Life Cycle]
• Test Planning:
Test Planning is a crucial phase of the software testing life cycle, where all
testing plans are defined. In this phase the manager of the testing team
estimates the effort and cost of the testing work. This phase
starts once the requirement-gathering phase is completed.
• Test Case Development:
The test case development phase starts once the test planning phase
is completed. In this phase the testing team writes the detailed test cases.
The testing team also prepares the required test data. Once the
test cases are prepared, they are reviewed by the quality assurance team.
• Test Environment Setup:
Test environment setup is a vital part of the STLC. The test
environment determines the conditions under which the software is tested. It is an
independent activity and can be started in parallel with test case development.
The testing team is not involved in this process; either the developer or the
customer creates the testing environment.
• Test Execution:
After test case development and test environment setup, the test execution
phase starts. In this phase the testing team begins executing the test cases
prepared in the earlier step.
• Test Closure:
This is the last stage of STLC in which the process of testing is analyzed.
• Testing metrics and Measurements
• Software Measurement:
A measurement is a manifestation of the size, quantity, amount or
dimension of a particular attribute of a product or process.
Software measurement is a quantified attribute of a characteristic of a
software product or the software process. It is a discipline within
software engineering. The software measurement process is defined
and governed by ISO standards.
#2) %ge Test cases not executed: This metric is used to obtain the pending
execution status of the test cases in terms of %ge.
%ge Test cases not executed = (No. of Test cases not executed / Total no. of
Test cases written) * 100.
So, from the above data,
%ge Test cases not executed = (35 / 100) * 100 = 35%
#3) %ge Test cases Passed: This metric is used to obtain the Pass %ge of the
executed test cases.
%ge Test cases Passed = (No. of Test cases Passed / Total no. of Test cases
Executed) * 100.
So, from the above data,
%ge Test cases Passed = (30 / 65) * 100 = 46%
#4) %ge Test cases Failed: This metric is used to obtain the Fail %ge of the
executed test cases.
%ge Test cases Failed = (No. of Test cases Failed / Total no. of Test cases
Executed) * 100.
So, from the above data,
%ge Test cases Failed = (26 / 65) * 100 = 40%
#5) %ge Test cases Blocked: This metric is used to obtain the blocked %ge of
the executed test cases. A detailed report can be submitted by specifying the
actual reason for blocking the test cases.
%ge Test cases Blocked = (No. of Test cases Blocked / Total no. of Test cases
Executed) * 100.
So, from the above data,
%ge Test cases Blocked = (9 / 65) * 100 = 14%
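The four test-case status metrics above can be reproduced with a minimal Python sketch, using the counts implied by the worked examples (100 test cases written, 65 executed, 30 passed, 26 failed, 9 blocked); the helper name `pct` is illustrative:

```python
# Test-case status metrics from the worked examples above:
# 100 test cases written, 65 executed, 30 passed, 26 failed, 9 blocked.
def pct(part, whole):
    """Percentage of `part` relative to `whole`, rounded to a whole number."""
    return round(part / whole * 100)

written, executed = 100, 65
passed, failed, blocked = 30, 26, 9
not_executed = written - executed  # 35

print("Not executed:", pct(not_executed, written), "%")  # 35 % (of written)
print("Passed:", pct(passed, executed), "%")             # 46 % (of executed)
print("Failed:", pct(failed, executed), "%")             # 40 % (of executed)
print("Blocked:", pct(blocked, executed), "%")           # 14 % (of executed)
```

Note that the not-executed metric divides by the total written, while the pass/fail/blocked metrics divide by the total executed, exactly as in the formulas above.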
#8) Defect Leakage: Defect Leakage is the Metric which is used to identify the
efficiency of the QA testing i.e., how many defects are missed/slipped during the QA
testing.
Defect Leakage = (No. of Defects found in UAT / No. of Defects found in QA testing.) * 100
Suppose, During Development & QA testing, we have identified 100 defects.
After the QA testing, during Alpha & Beta testing, end-user / client identified 40 defects,
which could have been identified during QA testing phase.
Defect Leakage = (40 /100) * 100 = 40%
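The defect leakage calculation above can be sketched in Python (the function name is an illustrative choice, not a standard API):

```python
def defect_leakage(defects_in_uat, defects_in_qa):
    """Defect Leakage % = defects missed by QA (i.e. found later in UAT)
    divided by defects found during QA testing, times 100."""
    return defects_in_uat / defects_in_qa * 100

# 100 defects found during QA testing; 40 more slipped through to UAT:
print(defect_leakage(40, 100))  # 40.0
```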
#9) Defects by Priority: This metric is used to identify the no. of defects identified based
on the Severity / Priority of the defect which is used to decide the quality of the
software.
• %ge Critical Defects = No. of Critical Defects identified / Total no. of Defects identified *
100
From the data available in the above table,
%ge Critical Defects = 6/ 30 * 100 = 20%
• %ge High Defects = No. of High Defects identified / Total no. of Defects identified * 100
From the data available in the above table,
%ge High Defects = 10/ 30 * 100 = 33.33%
• %ge Medium Defects = No. of Medium Defects identified / Total no. of Defects
identified * 100
From the data available in the above table,
%ge Medium Defects = 6/ 30 * 100 = 20%
• %ge Low Defects = No. of Low Defects identified / Total no. of Defects identified * 100
From the data available in the above table,
%ge Low Defects = 8/ 30 * 100 = 27%
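The per-priority percentages above can be reproduced with a short Python loop over the defect counts from the table (note that 8/30 is 26.67%, which the figure of 27% above rounds up):

```python
# Defect counts by priority from the table (total = 30 defects).
defects = {"Critical": 6, "High": 10, "Medium": 6, "Low": 8}
total = sum(defects.values())

for priority, count in defects.items():
    print(f"{priority}: {count / total * 100:.2f}%")
# Critical: 20.00%
# High: 33.33%
# Medium: 20.00%
# Low: 26.67%
```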