Software Testing Chapter-1

The document discusses software testing definitions, testing objectives and principles, the differences between defects, bugs and failures, and testing versus debugging. It covers topics like the immediate and long-term goals of testing, principles of testing like defect clustering and the pesticide paradox, and differences between testing and debugging processes.

Chapter 1

Introduction To Software
Testing
Basics of Software Testing – faults, errors and failures

Testing objectives

Principles of testing

Testing and debugging

Testing metrics and measurements

Verification and Validation

Testing Life Cycle


SOFTWARE TESTING DEFINITIONS

Many practitioners and researchers have defined software
testing in their own way. Some are given below.
• Testing is the process of executing a program with the intent of
finding errors.........Myers [2]
• A successful test is one that uncovers an as-yet-undiscovered
error........Myers [2]
• Testing is a support function that helps developers look good
by finding their mistakes before anyone else does........James Bach [83]
• The underlying motivation of program testing is to affirm
software quality with methods that can be economically and
effectively applied to both large-scale and small-scale
systems......Miller [126]
• Testing is a concurrent lifecycle process of engineering, using
and maintaining testware (i.e. testing artifacts) in order to
measure and improve the quality of the software
being tested.....Craig [117]
1. Immediate Goals:
These objectives are the direct outcomes of testing. They may be set at any
time during the SDLC process.
• Bug Discovery:
This is the immediate goal of software testing: to find errors at any stage of software
development.
Most bugs should be discovered in the early stages of testing.
The primary purpose of software testing is to detect flaws at any step of the
development process.
The higher the number of issues detected at an early stage, the higher the software
testing success rate.
• Bug Prevention:
This is the consequent action that follows bug discovery.
Everyone on the software development team learns from the behavior and
analysis of the issues detected, ensuring that similar bugs are not repeated in subsequent
phases or future projects.
2. Long-Term Goals:
These objectives have an impact on product quality in the long run after one cycle of
the SDLC is completed.
Quality: This goal enhances the quality of the software product. Because software is
also a product, the user’s priority is its quality. Superior quality is ensured by
thorough testing. Correctness, integrity, efficiency, and reliability are all aspects that
influence quality.
Customer Satisfaction: This goal verifies the customer’s satisfaction with a developed
software product. The primary purpose of software testing, from the user’s
standpoint, is customer satisfaction.
Reliability: It is a matter of confidence that the software will not fail. In short,
reliability means gaining the confidence of the customers by providing them with a
quality product.
Risk Management: Risk is the probability of occurrence of uncertain events in the
organization and the potential loss that could result in negative consequences. Risk
management must be done to reduce the failure of the product and to manage risk
in different situations.
3. Post-Implementation Goals:
After the product is released, these objectives become critical.
• Reduce Maintenance Cost: Post-release errors are costlier to fix and harder to
identify. Because software does not wear out physically, the maintenance cost of a
software product is unlike that of a physical product: the failure of the product
due to residual faults is the main expense of maintenance.
Because they are difficult to discover, post-release mistakes always cost more to
rectify.
As a result, if testing is done thoroughly and effectively, the risk of failure is lowered,
and maintenance costs are reduced accordingly.
• Improved Software Testing Process: These post-implementation goals improve the
testing process for future software projects. A project's testing procedure may not
be completely successful, and there may be room for improvement. The bug history
and post-implementation results can therefore be evaluated to identify stumbling
blocks in the current testing process that can be avoided in future projects.
• Differences between defect, bug and failure
Generally, when a system/application does not behave as expected or behaves
abnormally, we loosely call it an error, a fault, and so on. Many newcomers to the
software testing industry are confused about how to use these terms.
• Defect:
A flaw introduced by the programmer inside the code is called a defect.
A defect is defined as a deviation between the actual and expected result of an
application or software, or as any deviation or irregularity from the specifications
mentioned in the product's functional specification document.
A defect is fixed by the developer during the development phase.
• Reasons for Defects:
Any deviation from the customer requirements is called a defect.
Giving wrong input may lead to a defect.
Any error in the logic of the code may lead to a defect.
• Bug:

People are often confused between defect and bug; some say that "bug" is the
informal name for a defect. In fact, bugs are faults in a system or application that
impact software functionality and performance.
Bugs are usually found by testers during unit testing.

• There are different types of bugs; some of them are given below:
 Functional errors
 Compilation errors
 Missing commands
 Run-time errors
 Logical errors
 Inappropriate error handling
These kinds of errors lead to bugs.
• Failure:

When a defect reaches the end customer, it is called a failure.

Once the product is completed and delivered to the customers, and the
customer finds any issue in the product or software, it is a condition of
product failure.
In other words, if an end user finds an issue in the product, that particular
issue is called a failure.

• Causes of Failure:
Human errors or mistakes may lead to failure.
Environmental conditions.
The way in which the system is used.
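The error-defect-failure chain can be made concrete with a short sketch. The function below is hypothetical, chosen only for illustration: a programmer's mistake (error) plants a defect in the code, and when the faulty code reaches the user, the wrong output it produces is the failure.

```python
# Illustrative sketch (hypothetical function): an error introduces a
# defect; the defect surfaces as a failure when the user runs the code.

def average(numbers):
    # Defect: divides by a hard-coded 2 instead of len(numbers).
    return sum(numbers) / 2

def average_fixed(numbers):
    # Corrected version: the defect is removed during development,
    # before it can reach the customer as a failure.
    return sum(numbers) / len(numbers)

print(average([10, 20, 30]))        # 30.0 -- failure: user expected 20.0
print(average_fixed([10, 20, 30]))  # 20.0 -- correct
```

If testing catches the defect before delivery, the user never experiences the failure; that is the whole point of the distinction.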
• Principles of Software Testing
1. Testing shows the presence of defects
2. Exhaustive testing is not possible
3. Early testing
4. Defect clustering
5. Pesticide paradox
6. Testing is context-dependent
7. Absence of errors fallacy

• Testing shows the presence of defects:

Software testing talks about the presence of defects; it does not talk about the
absence of defects.
Software testing can show that defects are present, but it cannot prove that the
software is defect-free.
Even multiple rounds of testing can never ensure that software is 100% bug-free.
Testing can reduce the number of defects but cannot remove all of them.
• Exhaustive testing is not possible:
Testing the functionality of the software with all possible inputs
(valid or invalid) and pre-conditions is known as exhaustive testing.
Exhaustive testing is impossible: the software can never be tested with every
possible input. We can test only some cases and, based on them, assume that
the software will produce correct output for the rest.
Testing the software with every possible input would take an impractical
amount of cost and effort.
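The impracticality is easy to quantify. The back-of-the-envelope arithmetic below considers a hypothetical function taking just two 32-bit integer inputs; the testing rate is an assumed, optimistic figure.

```python
# Why exhaustive testing is impractical: count the inputs for a tiny
# function that takes only two 32-bit integers.

values_per_int = 2 ** 32            # distinct values of one 32-bit integer
total_inputs = values_per_int ** 2  # all (a, b) input pairs: 2**64

# At an (assumed, optimistic) one million test cases per second:
seconds = total_inputs / 1_000_000
years = seconds / (60 * 60 * 24 * 365)

print(f"{total_inputs} input pairs, roughly {years:,.0f} years to cover")
```

Even this trivial signature would need hundreds of thousands of years to cover, which is why testers select representative cases instead.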

• Early Testing:
To find defects early, test activities should be started as early as possible. A
defect detected in the early phases of the SDLC is much less expensive to fix.
For better results, software testing should start at the initial
phase, i.e. testing should be performed at the requirement analysis phase.
• Defect clustering:
In a project, a small number of modules can contain most of the defects.
Applied to software testing, the Pareto Principle states that 80% of software
defects come from 20% of modules.
• Pesticide paradox:
Repeating the same test cases, again and again, will not find new bugs.
So it is necessary to review the test cases and add or update test cases to find
new bugs.
• Testing is context-dependent:
The testing approach depends on the context of the software developed.
Different types of software need to perform different types of testing.
For example, The testing of the e-commerce site is different from the testing of
the Android application.
• Absence of errors fallacy:
If the software built is 99% bug-free but does not meet the user requirements,
it is unusable.
It is not enough for software to be 99% bug-free; it must also fulfill all the
customer requirements.
• Differences between Testing and Debugging

Testing | Debugging
Testing is the process to find bugs and errors. | Debugging is the process to correct the bugs found during testing.
It is the process to identify the failure of implemented code. | It is the process to resolve the failure in the code.
Testing is the display of errors. | Debugging is a deductive process.
Testing is done by the tester. | Debugging is done by the programmer or developer.
There is no need of design knowledge in the testing process. | Debugging cannot be done without proper design knowledge.
Testing can be done by an insider as well as an outsider. | Debugging is done only by an insider; an outsider cannot do debugging.
Testing can be manual or automated. | Debugging is always manual; debugging cannot be automated.
Testing is based on different levels, i.e. unit testing, integration testing, system testing, etc. | Debugging is based on different types of bugs.
Testing is a stage of the software development life cycle (SDLC). | Debugging is not a stage of the SDLC; it occurs as a consequence of testing.
Testing is composed of validation and verification of software. | The debugging process seeks to match symptom with cause, and thereby leads to error correction.
Testing is initiated after the code is written. | Debugging commences with the execution of a test case.
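The contrast can be sketched in a few lines. The function below is hypothetical: testing executes the code and reveals that a defect exists; debugging then traces why and corrects the cause.

```python
# Minimal sketch (hypothetical function): testing shows THAT a bug
# exists; debugging finds WHY and fixes it.

def discount_price(price, percent):
    # Defect: subtracts the percentage itself, not the discount amount.
    return price - percent

# Testing: execute a test case and compare actual vs expected output.
actual = discount_price(200, 10)
expected = 180                         # 10% off 200
defect_present = (actual != expected)  # True -> the test found a defect

# Debugging: inspect intermediate values (e.g. with pdb or tracing) to
# match the symptom with its cause, then correct the code.
def discount_price_fixed(price, percent):
    return price - price * percent / 100

assert discount_price_fixed(200, 10) == 180
```

Note how the test case only signals a mismatch; locating the missing `price * percent / 100` computation is the debugging step, which requires knowledge of the code's design.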
• Differences between Verification and Validation

Verification | Validation
It includes checking documents, design, code and programs. | It includes testing and validating the actual product.
Verification is static testing. | Validation is dynamic testing.
It does not include the execution of the code. | It includes the execution of the code.
Methods used in verification are reviews, walkthroughs, inspections and desk-checking. | Methods used in validation are black-box testing, white-box testing and non-functional testing.
It checks whether the software conforms to specifications or not. | It checks whether the software meets the requirements and expectations of the customer or not.
It can find bugs in the early stage of development. | It can only find the bugs that could not be found by the verification process.
The target of verification is the application and software architecture and specification. | The target of validation is the actual product.
The quality assurance team does verification. | Validation is executed on the software code with the help of the testing team.
It comes before validation. | It comes after verification.
It consists of checking documents/files and is performed by humans. | It consists of execution of the program and is performed by computer.
• Software Testing Life Cycle (STLC)
It is a sequence of different activities performed during the software testing
process.
• Characteristics of STLC:
STLC is a fundamental part of Software Development Life Cycle (SDLC) but
STLC consists of only the testing phases.
STLC starts as soon as requirements are defined or software requirement
document is shared by stakeholders.
• STLC yields a step-by-step process to ensure quality software.
• Software Testing Life Cycle (STLC) Phases:
1. Requirement Analysis:
Requirement Analysis is the first step of the Software Testing Life Cycle (STLC). In this
phase the quality assurance team understands the requirements, i.e. what is to be
tested. If anything is missing or not understandable, the quality assurance team
meets with the stakeholders to understand the requirements in detail.
Testing Life Cycle
• Test Planning:
Test Planning is a crucial phase of the software testing life cycle, where all
testing plans are defined. In this phase the manager of the testing team
calculates the estimated effort and cost for the testing work. This phase
starts once the requirement gathering phase is completed.
• Test Case Development:
The test case development phase starts once the test planning phase
is completed. In this phase the testing team writes down detailed test cases.
The testing team also prepares the required test data for the testing. When the
test cases are prepared, they are reviewed by the quality assurance team.
• Test Environment Setup:
Test environment setup is a vital part of the STLC. Basically, the test
environment decides the conditions under which the software is tested. It is an
independent activity and can be started along with test case development.
The testing team is not involved in this process; either the developer or the
customer creates the testing environment.
• Test Execution:
After test case development and test environment setup, the test execution
phase starts. In this phase the testing team executes the test cases prepared
in the earlier step.
• Test Closure:
This is the last stage of the STLC, in which the testing process is analyzed.
• Testing metrics and Measurements
• Software Measurement:
A measurement is a manifestation of the size, quantity, amount or
dimension of a particular attribute of a product or process.
Software measurement is a quantified attribute of a characteristic of a
software product or the software process. It is a discipline within
software engineering. The software measurement process is defined
and governed by ISO standards.

• Need of Software Measurement:

Assess the quality of the current product or process.
Anticipate future qualities of the product or process.
Enhance the quality of a product or process.
Regulate the state of the project in relation to budget and schedule.
• Software Metrics:
A metric is a measure of the degree to which a system, product or process
possesses a given attribute. There are 4 functions related to software metrics:
Planning
Organizing
Controlling
Improving

Characteristics of Software Metrics:

• Quantitative:
Metrics must be quantitative in nature, meaning metrics can be expressed in
values.
• Understandable:
Metric computation should be easily understood, and the method of computing
the metric should be clearly defined.
• Applicability:
Metrics should be applicable in the initial phases of development of the
software.
• Repeatable:
The metric values should be the same when measured repeatedly, i.e. consistent
in nature.
• Economical:
Computation of metrics should be economical.
• Language Independent:
Metrics should not depend on any programming language.
• Classification of Software Metrics:
There are 3 types of software metrics:
• Product Metrics:
Product metrics are used to evaluate the state of the product, tracing risks
and uncovering prospective problem areas. The ability of the team to control
quality is assessed.
• Process Metrics:
Process metrics pay particular attention to enhancing the long-term process
of the team or organization.
• Project Metrics:
Project metrics describe the project characteristics and execution process.
Examples include:
• Number of software developers
• Staffing pattern over the life cycle of the software
• Cost and schedule
• Productivity
Metrics Life Cycle
• Examples of Software Testing Metrics
• Let's take an example to calculate various test metrics used in software test
reports.
• Below is the data retrieved from the Test Analyst actually involved in testing
(the original table is reconstructed here from the calculations that follow):

Total test cases written: 100
Test cases executed: 65
Test cases not executed: 35
Test cases passed: 30
Test cases failed: 26
Test cases blocked: 9
Defects identified: 30 (Critical: 6, High: 10, Medium: 6, Low: 8)
Requirements covered: 5
• Definitions and Formulas for Calculating Metrics:
#1) %ge Test cases Executed: This metric is used to obtain the execution status
of the test cases in terms of %ge.
%ge Test cases Executed = (No. of Test cases executed / Total no. of Test cases
written) * 100.
So, from the above data,
%ge Test cases Executed = (65 / 100) * 100 = 65%

#2) %ge Test cases not executed: This metric is used to obtain the pending
execution status of the test cases in terms of %ge.
%ge Test cases not executed = (No. of Test cases not executed / Total no. of
Test cases written) * 100.
So, from the above data,
%ge Test cases not executed = (35 / 100) * 100 = 35%
#3) %ge Test cases Passed: This metric is used to obtain the Pass %ge of the
executed test cases.
%ge Test cases Passed = (No. of Test cases Passed / Total no. of Test cases
Executed) * 100.
So, from the above data,
%ge Test cases Passed = (30 / 65) * 100 = 46%
#4) %ge Test cases Failed: This metric is used to obtain the Fail %ge of the
executed test cases.
%ge Test cases Failed = (No. of Test cases Failed / Total no. of Test cases
Executed) * 100.
So, from the above data,
%ge Test cases Failed = (26 / 65) * 100 = 40%
#5) %ge Test cases Blocked: This metric is used to obtain the blocked %ge of
the executed test cases. A detailed report can be submitted by specifying the
actual reason for blocking the test cases.
%ge Test cases Blocked = (No. of Test cases Blocked / Total no. of Test cases
Executed) * 100.
So, from the above data,
%ge Test cases Blocked = (9 / 65) * 100 = 14%
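The execution-status formulas (#1 through #5) can be sketched directly in code, using the example figures above (100 test cases written, 65 executed, 30 passed, 26 failed, 9 blocked):

```python
# Execution-status metrics #1-#5, computed from the example data.

total_written = 100
executed = 65
passed, failed, blocked = 30, 26, 9

pct_executed = executed / total_written * 100                        # 65.0
pct_not_executed = (total_written - executed) / total_written * 100  # 35.0
pct_passed = passed / executed * 100                                 # ~46.2
pct_failed = failed / executed * 100                                 # 40.0
pct_blocked = blocked / executed * 100                               # ~13.8

print(f"Executed {pct_executed:.0f}%, Passed {pct_passed:.0f}%, "
      f"Failed {pct_failed:.0f}%, Blocked {pct_blocked:.0f}%")
```

Note the different denominators: executed/not-executed are taken over test cases written, while passed/failed/blocked are taken over test cases executed.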

#6) Defect Density = No. of Defects identified / size


(Here “Size” is considered a requirement. Hence here the Defect Density is
calculated as a number of defects identified per requirement. Similarly, Defect
Density can be calculated as a number of Defects identified per 100 lines of
code [OR] No. of defects identified per module, etc.)
So, from the above data,
Defect Density = (30 / 5) = 6
#7) Defect Removal Efficiency (DRE) = (No. of Defects found during QA testing / (No. of
Defects found during QA testing +No. of Defects found by End-user)) * 100
DRE is used to identify the test effectiveness of the system.
Suppose, During Development & QA testing, we have identified 100 defects.
After the QA testing, during Alpha & Beta testing, the end-user / client identified 40
defects, which could have been identified during the QA testing phase.
Now, The DRE will be calculated as,
DRE = [100 / (100 + 40)] * 100 = [100 /140] * 100 = 71%

#8) Defect Leakage: Defect Leakage is the Metric which is used to identify the
efficiency of the QA testing i.e., how many defects are missed/slipped during the QA
testing.
Defect Leakage = (No. of Defects found in UAT / No. of Defects found in QA testing.) * 100
Suppose, During Development & QA testing, we have identified 100 defects.
After the QA testing, during Alpha & Beta testing, end-user / client identified 40 defects,
which could have been identified during QA testing phase.
Defect Leakage = (40 /100) * 100 = 40%
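The defect metrics (#6 through #8) can likewise be sketched in code, using the worked figures above (30 defects over 5 requirements; 100 defects found in QA, 40 more found afterwards by end users):

```python
# Defect metrics #6-#8: Defect Density, DRE, and Defect Leakage,
# computed from the example figures.

defects_found = 30
requirements = 5
defect_density = defects_found / requirements   # 6 defects per requirement

qa_defects = 100    # found during development & QA testing
uat_defects = 40    # leaked past QA and found by the end user

dre = qa_defects / (qa_defects + uat_defects) * 100  # ~71.4%
defect_leakage = uat_defects / qa_defects * 100      # 40.0%

print(f"Density={defect_density}, DRE={dre:.0f}%, "
      f"Leakage={defect_leakage:.0f}%")
```

DRE and Defect Leakage are two views of the same data: DRE divides QA defects by all defects ever found, while leakage divides the escaped defects by those QA caught, so a high DRE corresponds to a low leakage.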
#9) Defects by Priority: This metric is used to identify the no. of defects identified based
on the Severity / Priority of the defect which is used to decide the quality of the
software.
• %ge Critical Defects = No. of Critical Defects identified / Total no. of Defects identified *
100
From the data available in the above table,
%ge Critical Defects = 6/ 30 * 100 = 20%
• %ge High Defects = No. of High Defects identified / Total no. of Defects identified * 100
From the data available in the above table,
%ge High Defects = 10/ 30 * 100 = 33.33%
• %ge Medium Defects = No. of Medium Defects identified / Total no. of Defects
identified * 100
From the data available in the above table,
%ge Medium Defects = 6/ 30 * 100 = 20%
• %ge Low Defects = No. of Low Defects identified / Total no. of Defects identified * 100
From the data available in the above table,
%ge Low Defects = 8/ 30 * 100 = 26.67%
