
Software Testing & Quality Assurance

Unit 4

Testing and Test case design Techniques

Dynamic Testing
Dynamic testing refers to analysing the dynamic behaviour of code in the software. In this type of testing, you provide input and check the output against expectations by executing a test case. Test cases can be run manually or through an automation process, and the software code must be compiled and run for this.
The main purpose of dynamic testing is to validate the software and ensure it works properly, without any faults, after installation. In short, dynamic testing assures the overall functionality and performance of the application, which should also be stable and consistent.

Characteristics of Dynamic Testing


 Execution of code: The software's code must be compiled and run in the test environment, and it should be error-free.
 Execution of test cases on the running system: First, identify the features that need to be tested, then execute the test cases in the test environment. The test cases must be prepared in the early stages of dynamic testing.
 Inputs are provided during the execution: The code must be executed with the required inputs, as per the end users' specifications.
 Observing the output and behaviour of the system: Analyse the actual output after test execution and compare it to the expected output. This comparison determines the behaviour of the system: if the outputs match, the test passes; otherwise, the test fails and should be reported as a bug.
Types of Dynamic Testing
1. Functional Testing
Functional testing checks the functionality of an application against the requirement specifications. Each module is tested by providing an input, defining the expected output, and verifying the actual result against the expected one. This testing is further divided into four types:

 Unit Testing: It tests the code's accuracy and validates every component of a software module. It determines that every component or unit can work independently (a minimal sketch follows this list).
 Integration Testing: It integrates or combines each component and tests the
data flow between them. It ensures that the components work together and
interact well.
 System Testing: It tests the entire system, so it is also known as end-to-end testing. You should work through all the modules and check the features so that the product fits the business requirements.
 User Acceptance Testing (UAT): Customers perform this test just before the software is released to the market, to check that the system meets real user conditions and business specifications.
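As a minimal sketch of the unit level, the example below tests a hypothetical discount function in isolation using Python's built-in unittest module; the function and its rules are invented for illustration.

    import unittest

    def apply_discount(price, percent):
        # Hypothetical unit under test: reduce a price by a percentage.
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return round(price * (1 - percent / 100), 2)

    class TestApplyDiscount(unittest.TestCase):
        def test_typical_discount(self):
            self.assertEqual(apply_discount(200.0, 10), 180.0)

        def test_invalid_percent_rejected(self):
            with self.assertRaises(ValueError):
                apply_discount(200.0, 150)

    if __name__ == "__main__":
        unittest.main()

Each test exercises one behaviour of one unit, independently of the rest of the system, which is exactly what distinguishes unit testing from the higher levels above.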

2. Non-Functional Testing:

Non-functional testing means checking the quality of the software, i.e. testing whether the software meets the end users' requirements. It covers the product's usability, maintainability, effectiveness, and performance, and hence reduces the production risk associated with non-functional components.

 Performance Testing: In this testing, we check how the software performs under different conditions. Three kinds of conditions are the most common in performance testing (a small speed-check sketch follows this list). They are:
o Speed Testing: Measures the time required to load a web page with all its components: texts, images, videos, etc.
o Load Testing: Tests the software's stability as the number of users increases gradually. That means, with this test, you can check the system's performance under variable loads.
o Stress Testing: Determines the limit at which the system breaks under a sudden increase in the number of users.
 Security Testing: Security testing reveals the vulnerabilities and threats of a
system. Also, it ensures that the system is protected from unauthorized access,
data leakages, attacks, and other issues. Then fix the issues before deployment.
 Usability Testing: This test checks how easily an end user can handle a software/system/application. Additionally, it checks the app's flexibility and its capability to meet the user's requirements.
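As a minimal illustration of the speed-testing idea above, the sketch below times a single page load with the requests library; the URL and the two-second threshold are assumptions chosen for the example, not real acceptance criteria.

    import time
    import requests

    # Hypothetical target page and acceptance threshold.
    URL = "https://example.com/"
    MAX_LOAD_SECONDS = 2.0

    start = time.monotonic()
    response = requests.get(URL, timeout=10)
    elapsed = time.monotonic() - start

    assert response.status_code == 200, "page did not load successfully"
    assert elapsed < MAX_LOAD_SECONDS, f"page took {elapsed:.2f}s to load"
    print(f"Loaded {URL} in {elapsed:.2f}s")

A real load or stress test would repeat this kind of check with many concurrent virtual users; this sketch only shows the single-request speed measurement.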

Advantages and Disadvantages of Dynamic Testing
 Advantages

1. Detects defects early: The dynamic test cases need to be prepared in the early phases of the testing life cycle, with inputs and expected outputs. This way, defects are found as soon as the test cases are executed, without wasting time.
2. Provides confidence in the system: Dynamic testing covers the major functionalities of a system and its components and ensures they are working correctly. It also covers the communication between modules, increasing the quality of the software.
3. Helps in identifying the system’s performance: Non-functional testing
is an essential category of dynamic testing that helps to identify the
system’s performance. This test includes checking an application’s
response, speed, stress, and loading time in real user conditions and
environments.

 Disadvantages

1. Time-consuming: This testing requires many resources, including test cases, and thus becomes time-consuming.
2. Requires skilled resources: You need to hire well-skilled software testers. They must have the in-depth coding knowledge that unit and integration testing require, so you need to spend a lot of money here.
3. Costly: Dynamic testing occurs in the later stages of software development, after the completion of the coding phase. Issues found and fixed at this stage therefore increase the cost of the project or product.

Review work products:


Think of software review as giving your homework a thorough check before
turning it in. You review your work to find mistakes, make improvements, and
get that A+ grade. In the software world, it’s about taking a magnifying glass to
the software to catch any problems and enhance its performance. The goal is to
ensure that the software runs smoothly and meets the expectations of its users,
just like you’d want your homework to impress your teacher.

Reasons why Reviews Are Important:


 First, they improve the quality of the software product or component by
identifying and resolving defects early in the development process, preventing
issues from reaching end-users, leading to enhanced customer satisfaction and
reduced support requests.
 Second, reviews reduce the overall cost and time of the software development
project by catching defects early, saving time, and resources that would
otherwise be needed to address issues in later stages of the development cycle.
 Third, they provide an opportunity for knowledge sharing and collaboration
among team members, promoting best practices and fostering improved
teamwork and productivity.
 Fourth, they ensure compliance with regulatory and industry standards by
identifying issues that may be in violation of regulatory requirements or
industry standards, allowing for corrective action to be taken before the product
is released.

Planning:
 Defining the scope
 Estimating effort and timeframe
 Identifying review characteristics
 Selecting the people to participate in the review and allocating roles
 Defining entry and exit criteria

Initiating the review:
 Distributing the work product and related material
 Explaining the scope, objectives, process, roles, and work products to the
participants
 Answering any questions that participants may have about the review

Individual review:
 Reviewing all or part of the work product
 Noting potential defects, recommendations, and questions

Issue communication and analysis:


 Communicating identified potential defects
 Analysing potential defects and assigning ownership and status to them
 Evaluating and documenting quality characteristics
 Evaluating the review findings against the exit criteria to make a review
decision

Fixing and reporting:


 Creating defect reports for findings that require changes to a work product
 Fixing defects found in the work product reviewed
 Communicating defects to the appropriate person or team and recording the
updated status of defects
 Gathering metrics
 Checking that the exit criteria are met
 Accepting the work product when the exit criteria are reached
In a formal review, there are several roles and responsibilities that are typically
assigned to participants.

Roles and Responsibilities:


 Moderator/Chairperson: Responsible for conducting the review meeting, ensuring that the review is conducted according to the review process, and ensuring that the review objectives are met.
 Author: The person who wrote the work product being reviewed.
 Reviewer: A person who is responsible for reading and analysing the work
product being reviewed and identifying any defects, issues, or potential
improvements.
 Recorder/Scribe: Responsible for recording the minutes of the review meeting
and documenting the issues raised, the decisions made, and any action items
assigned.
 Manager: A person who is responsible for managing the review process and
ensuring that the review is conducted effectively.
 Quality Assurance Representative: Responsible for ensuring that the review
process is in compliance with organizational quality standards and procedures.
 Technical Expert: A person who has expertise in the subject matter being reviewed and who can provide technical advice and guidance during the review process.
Identify Test Objectives, Test Specifications
and Test Design

What is Test Design?


Software test design is the process of creating a plan or strategy to test an entire software application, including all of its features and functions. Test design aims to identify software defects early in the development cycle, before the product is released to users. A team of testers is responsible for designing all aspects of a test, including:

 Determining what data will be used in the test case design


 Documenting all software aspects in a diagram, table, or other suitable format
 Predicting all potential errors and mistakes based on previous versions
and the team’s knowledge

Test design is a critical part of every software launch, and it helps guarantee that fewer bugs and errors are encountered.

Test Objectives:
Identification of Bugs
The first and foremost goal of software testing is to improve software quality by
identifying bugs in the application. Irrespective of the size or complexity of the
application, tests are conducted on a continuous basis to unearth issues and
build a high-quality product.

Tests that require human intervention (or cannot be automated) are conducted manually. On the other hand, test automation frameworks (or tools) are leveraged to run tests in a CI/CD pipeline. Such an approach shortens the developer feedback loop and continually improves the software quality at every stage of application development.

Improvement of software quality


As mentioned in the earlier point, the primary intent of running software tests is to locate potential bugs in the application. These bugs are then prioritized by severity, after which they are assigned to the respective developers for fixing.

Enhancement of Security
Security tests like penetration tests, compliance tests, network security tests, etc.
can be leveraged to unearth security vulnerabilities in the application. In many
tests, internal testers don the hats of the hacker to identify security loopholes.
Network security testing ensures that any form of data is always encrypted &
secured, whether it is in motion or at rest!

Hence, security testing must be leveraged to validate different aspects of the application, especially from a security standpoint. As security testing is extremely important, it is wise to on-board an experienced QA testing services company that has ample expertise in security testing!

Enhancement of scalability and reliability


Many of us have witnessed outages when shopping on popular e-commerce platforms, particularly when there is significantly high traffic on the website.

The main reason for such an experience could be unprecedented load on the
website, causing hiccups to the end-users of the website (or application). This is
where load testing can be valuable, as it helps in validating the application’s
behaviour when it is subjected to different kinds of load. Load testing falls in
the category of performance testing, a non-functional type of testing that helps
in improving the scalability, reliability, responsiveness, and speed aspects of the
application.

Some of the major forms of performance testing are:

 Load testing
 Stress testing
 Volume testing
 Soak testing
 Endurance testing
A highly secure, reliable, and scalable application can have a long-lasting
impact on the end-user experience. Performance testing helps in ensuring that
your application is built for scale, thereby ensuring that there are no outages at
times when there is a high load on the application!

Test Specifications:
Software testing standards are a set of guidelines and principles used to define
and guide the process of software testing. These standards outline the strategies,
procedures, methodologies, and best practices for testing software to ensure its
quality, functionality, and performance. They also provide a foundation for
creating and evaluating software testing processes, techniques, and tools. Some
common software testing standards include ISO/IEC 29119, IEEE 829, and
ISO/IEC 9126. Adhering to these standards can help organizations improve the
effectiveness and efficiency of their software testing processes and ultimately
deliver high-quality software products to customers.
Here is a brief guide to some of the key elements of software testing standards:

 Test planning: This is the process of defining the scope, objectives, and
approach to software testing.
 Test case development: This is the process of creating test cases that
will be used to test software.
 Test execution: This is the process of running test cases against software.
 Test analysis: This is the process of reviewing test results and identifying
defects.
 Test reporting: This is the process of communicating test results to
stakeholders.

Design Test Cases:

What is a Test Case?


A test case is a defined format for software testing, used to check whether a particular application or piece of software is working or not. A test case consists of a certain set of conditions that need to be checked to test an application or software; in simpler terms, when the conditions are checked, the test verifies whether the resultant output meets the expected output or not. A test case consists of various parameters such as ID, condition, steps, input, expected result, actual result, status, and remarks.

Parameters of a Test Case:

 Module Name: Subject or title that defines the functionality of the test.
 Test Case Id: A unique identifier assigned to every single condition in a
test case.
 Tester Name: The name of the person who would be carrying out the test.
 Test scenario: The test scenario provides a brief description to the tester,
as in providing a small overview to know about what needs to be performed
and the small features, and components of the test.
 Test Case Description: The condition required to be checked for a given software, e.g., check whether numeric-only validation works for an age input box.
 Test Steps: Steps to be performed for the checking of the condition.
 Prerequisite: The conditions required to be fulfilled before the start of the
test process.
 Test Priority: As the name suggests, this indicates which test cases are more important and must be performed first, and which can be performed later.
 Test Data: The inputs to be taken while checking for the conditions.
 Test Expected Result: The output which should be expected at the end of
the test.
 Test parameters: Parameters assigned to a particular test case.
 Actual Result: The output that is displayed at the end.
 Environment Information: The environment in which the test is being
performed, such as the operating system, security information, the software
name, software version, etc.
 Status: The status of tests such as pass, fail, NA, etc.
 Comments: Remarks on the test regarding the test for the betterment of the
software.
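To make the parameters above concrete, here is a hedged sketch of a single test case recorded as a Python dictionary; every field value is invented for illustration, and real teams would typically hold the same fields in a test-management tool or spreadsheet.

    # A hypothetical test case using the parameters listed above.
    test_case = {
        "module_name": "User Registration",
        "test_case_id": "TC_REG_001",
        "tester_name": "A. Tester",
        "test_scenario": "Validate the age input box",
        "description": "Check that only numbers are accepted in the age field",
        "prerequisite": "Registration page is open",
        "priority": "High",
        "test_steps": ["Open registration form", "Type 'abc' in age field", "Submit"],
        "test_data": {"age": "abc"},
        "expected_result": "Validation error: age must be a number",
        "actual_result": None,   # filled in during execution
        "status": "Not Run",     # Pass / Fail / NA after execution
        "environment": {"os": "Windows 11", "browser": "Chrome 126"},
        "comments": "",
    }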

When do we Write Test Cases?


Test cases are written in different situations:
 Before development: Test cases could be written before the actual coding
as that would help to identify the requirement of the product/software and
carry out the test later when the product/software gets developed.
 After development: Test cases are also written directly after coming up
with a product/software or after developing the feature but before the
launching of a product/software as needed to test the working of that
particular feature.
 During development: Test cases are sometimes written in parallel with development, so that whenever a part of the module/software is developed, it is tested as well.

Black Box Test Case Design Techniques:

Black-box testing is a type of software testing in which the tester is not concerned with the software's internal knowledge or implementation details but rather focuses on validating the functionality based on the provided specifications or requirements.

Types of Black Box Testing :-

The following are the several categories of black box testing:


1. Functional Testing
2. Regression Testing
3. Non-functional Testing (NFT)

Functional Testing
 Functional testing is defined as a type of testing that verifies that each
function of the software application works in conformance with the
requirement and specification.
 This testing is not concerned with the source code of the application. Each
functionality of the software application is tested by providing appropriate
test input, expecting the output, and comparing the actual output with the
expected output.
 This testing focuses on checking the user interface, APIs, database,
security, client or server application, and functionality of the Application
Under Test. Functional testing can be manual or automated. It determines
the system’s software functional requirements.

Regression Testing
 Regression Testing is the process of testing the modified parts of the code
and the parts that might get affected due to the modifications to ensure that
no new errors have been introduced in the software after the modifications
have been made.
 Regression means the return of something and in the software field, it
refers to the return of a bug. It ensures that the newly added code is
compatible with the existing code.
 In other words, a new software update has no impact on the functionality of
the software. This is carried out after a system maintenance operation and
upgrades.

Non-functional Testing
 Non-functional testing, also known as NFT, is a software testing technique that checks the non-functional attributes of a software application.
 It is designed to test the readiness of a system against non-functional parameters, which are never addressed by functional testing.
 Non-functional testing is as important as functional testing.
 It is not functional testing of the software; it focuses on the software's performance, usability, and scalability.
Advantages of Black Box Testing:
 The tester does not need knowledge of the internal implementation or programming skills to carry out Black Box Testing.
 It is efficient for testing larger systems.
 Tests are executed from the user’s or client’s point of view.
 Test cases are easily reproducible.
 It is used to find the ambiguity and contradictions in the functional
specifications.

Disadvantages of Black Box Testing:

 There is a possibility of repeating the same tests while implementing the testing process.
 Without clear functional specifications, test cases are difficult to
implement.
 It is difficult to execute the test cases because of complex inputs at different
stages of testing.
 Sometimes, the reason for the test failure cannot be detected.
 Some programs in the application are not tested.
 It does not reveal the errors in the control structure.
 Working with a large sample space of inputs can be exhaustive and
consumes a lot of time.

White Box Test Case Design Techniques


White box testing techniques analyse the internal structure of the software: the data structures used, the internal design, the code structure, and the working of the software, rather than just the functionality as in black box testing. It is also called glass box testing, clear box testing, or structural testing. White Box Testing is also known as transparent testing or open box testing.
Types Of White Box Testing
White box testing can be done for different purposes. The three main types
are:
1. Unit Testing
2. Integration Testing
3. Regression Testing

Unit Testing
 Checks if each part or function of the application works correctly.
 Ensures the application meets design requirements during development.
Integration Testing
 Examines how different parts of the application work together.
 Done after unit testing to make sure components work well both alone and
together.
Regression Testing
 Verifies that changes or updates don’t break existing functionality.
 Ensures the application still passes all existing tests after updates.

White Box Testing Techniques:

1. Statement Coverage
In this technique, the aim is to traverse all statements at least once. Hence,
each line of code is tested. In the case of a flowchart, every node must be
traversed at least once. Since all lines of code are covered, it helps in pointing
out faulty code.
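As a sketch of statement coverage, the hypothetical function below has a single path that passes through every line; one test input that takes that path already executes every statement.

    def grade(score):
        # Hypothetical unit under test, invented for the example.
        result = "fail"
        if score >= 40:
            result = "pass"
        return result

    # One test with score >= 40 executes every statement (100% statement
    # coverage), yet never exercises the False outcome of the condition.
    assert grade(75) == "pass"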

2. Branch Coverage
In this technique, test cases are designed so that each branch from all decision
points is traversed at least once. In a flowchart, all edges must be traversed at
least once.
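Continuing the same hypothetical grade function, branch coverage is strictly stronger: it additionally requires a test in which the decision evaluates to False.

    def grade(score):
        # Same hypothetical function as in the statement-coverage sketch.
        result = "fail"
        if score >= 40:
            result = "pass"
        return result

    # Branch coverage for grade(): both outcomes of the decision are taken.
    assert grade(75) == "pass"   # condition True
    assert grade(20) == "fail"   # condition False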

3. Condition Coverage
In this technique, all individual conditions must be covered as shown in the
following example:
 READ X, Y
 IF(X == 0 || Y == 0)
 PRINT ‘0’
 #TC1 – X = 0, Y = 55
 #TC2 – X = 5, Y = 0
4. Multiple Condition Coverage
In this technique, all the possible combinations of the possible outcomes of
conditions are tested at least once. Let’s consider the following example:
 READ X, Y
 IF(X == 0 || Y == 0)
 PRINT ‘0’
 #TC1: X = 0, Y = 0
 #TC2: X = 0, Y = 5
 #TC3: X = 55, Y = 0
 #TC4: X = 55, Y = 5
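The two pseudocode fragments above translate directly into a runnable sketch; the function below is a literal rendering of READ X, Y / IF (X == 0 || Y == 0) / PRINT '0', returning True where the program would print '0'.

    def prints_zero(x, y):
        # True when the original pseudocode would print '0'.
        return x == 0 or y == 0

    # Condition coverage: each individual condition is both True and False.
    assert prints_zero(0, 55)        # TC1: X == 0 is True
    assert prints_zero(5, 0)         # TC2: Y == 0 is True

    # Multiple condition coverage: all four combinations of both conditions.
    assert prints_zero(0, 0)         # True,  True
    assert prints_zero(0, 5)         # True,  False
    assert prints_zero(55, 0)        # False, True
    assert not prints_zero(55, 5)    # False, False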

5. Basis Path Testing

In this technique, a control flow graph is made from the code or flowchart, and then the cyclomatic complexity is calculated, which gives the number of independent paths, so that a minimal number of test cases can be designed, one for each independent path. Steps:
 Make the corresponding control flow graph
 Calculate the cyclomatic complexity
 Find the independent paths
 Design test cases corresponding to each independent path
 V(G) = P + 1, where P is the number of predicate nodes in the flow graph
 V(G) = E – N + 2, where E is the number of edges and N is the total
number of nodes
 V(G) = Number of non-overlapping regions in the graph
For example, for a control flow graph with nodes numbered 1 to 8 (figure not shown), the independent paths are:
 #P1: 1 – 2 – 4 – 7 – 8
 #P2: 1 – 2 – 3 – 5 – 7 – 8
 #P3: 1 – 2 – 3 – 6 – 7 – 8
 #P4: 1 – 2 – 4 – 7 – 1 – . . . – 7 – 8
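The edge/node formula is easy to check mechanically; the helper below computes V(G) = E − N + 2, and the figures plugged in (10 edges, 8 nodes) are an assumption chosen so that V(G) = 4, matching the four independent paths listed above.

    def cyclomatic_complexity(edges, nodes):
        # V(G) = E - N + 2 for a connected control flow graph.
        return edges - nodes + 2

    # Hypothetical graph consistent with the four independent paths above.
    assert cyclomatic_complexity(edges=10, nodes=8) == 4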
6. Loop Testing

Loops are widely used and these are fundamental to many algorithms hence,
their testing is very important. Errors often occur at the beginnings and ends of
loops.
 Simple loops: For simple loops of size n, test cases are designed that:
1. Skip the loop entirely
2. Only one pass through the loop
3. 2 passes
4. m passes, where m < n
5. n-1 and n+1 passes
 Nested loops: For nested loops, all the loops are set to their minimum
count, and we start from the innermost loop. Simple loop tests are
conducted for the innermost loop and this is worked outwards till all the
loops have been tested.
 Concatenated loops: Independent loops, one after another. Simple loop
tests are applied for each. If they’re not independent, treat them like
nesting.
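As a small sketch, the helper below generates the pass counts recommended above for a simple loop with a maximum of n iterations; m is any interior value with m < n, and both names are just the symbols used in the list above.

    def simple_loop_test_counts(n, m):
        # Pass counts for testing a simple loop of maximum size n:
        # skip it, one pass, two passes, m passes (m < n), n-1, n, n+1.
        assert 0 < m < n
        return [0, 1, 2, m, n - 1, n, n + 1]

    print(simple_loop_test_counts(10, m=5))   # [0, 1, 2, 5, 9, 10, 11]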

Features of White Box Testing:

1. Code coverage analysis: White box testing helps to analyze the code
coverage of an application, which helps to identify the areas of the code
that are not being tested.
2. Access to the source code: White box testing requires access to the
application’s source code, which makes it possible to test individual
functions, methods, and modules.
3. Knowledge of programming languages: Testers performing white box
testing must have knowledge of programming languages like Java, C++,
Python, and PHP to understand the code structure and write tests.
4. Identifying logical errors: White box testing helps to identify logical
errors in the code, such as infinite loops or incorrect conditional statements.
5. Integration testing: White box testing is useful for integration testing, as it
allows testers to verify that the different components of an application are
working together as expected.
6. Unit testing: White box testing is also used for unit testing, which involves
testing individual units of code to ensure that they are working correctly.
7. Optimization of code: White box testing can help to optimize the code by
identifying any performance issues, redundant code, or other areas that can
be improved.
8. Security testing: White box testing can also be used for security testing, as
it allows testers to identify any vulnerabilities in the application’s code.
9. Verification of Design: It verifies that the software’s internal design is
implemented in accordance with the designated design documents.
10. Check for Accurate Code: It verifies that the code operates in accordance with the guidelines and specifications.
11. Determining the Dead Code: It finds and removes any code that isn't used when the programme runs normally (dead code).

Advantages of White Box Testing:

1. Thorough Testing: White box testing is thorough as the entire code and
structures are tested.
2. Code Optimization: It results in optimized code, removing errors and helping eliminate extra lines of code.
3. Early Detection of Defects: It can start at an earlier stage as it doesn’t
require any interface as in the case of black box testing.
4. Integration with SDLC: White box testing can be easily started in
Software Development Life Cycle.
5. Detection of Complex Defects: Testers can identify defects that cannot be
detected through other testing techniques.
6. Comprehensive Test Cases: Testers can create more comprehensive and
effective test cases that cover all code paths.
7. Testers can ensure that the code meets coding standards and is optimized
for performance.

Disadvantages of White Box Testing:

1. Programming Knowledge and Source Code Access: Testers need to have programming knowledge and access to the source code to perform tests.
2. Overemphasis on Internal Workings: Testers may focus too much on the
internal workings of the software and may miss external issues.
3. Bias in Testing: Testers may have a biased view of the software since they
are familiar with its internal workings.
4. Test Case Overhead: Redesigning code and rewriting code needs test
cases to be written again.
5. Dependency on Tester Expertise: Testers are required to have in-depth
knowledge of the code and programming language as opposed to black-box
testing.
6. Inability to Detect Missing Functionalities: Missing functionalities
cannot be detected as the code that exists is tested.
7. Increased Production Errors: High chances of errors in production.

Experience-based Test Case Design Techniques:

What is Experience Based Testing?


Experience-based testing isn’t your typical testing method—it’s a dynamic
approach that relies on a tester’s intuition, skills, and past experiences. This
technique transforms these insights into concrete test scenarios, benefiting from
the combined expertise of developers, testers, and users. Through collaborative
efforts, this approach shapes effective tests that truly count.
Experience-based testing truly shines through its ability to uncover test
scenarios that might slip through the cracks of other rigid methodologies. While
structured methods have their merits, experience-based testing adds a layer of
creativity and resourcefulness to the mix. This approach could be your project’s
game-changer to stand out in a testing landscape where thoroughness is key to
success.

Advantages of Experience-Based Testing:

 Adaptability to Sparse Documentation: Experience-based testing shines when dealing with systems that lack detailed documentation. Its flexibility makes it a viable alternative in such scenarios.
 Efficiency under Time Constraints: When time is of the essence and testing activities face tight schedules, experience-based testing proves its effectiveness by ensuring thorough testing within these limitations.
 Harnessing Domain Expertise: The true power of experience-based testing lies in tapping into the collective wisdom of the domain and technology experts associated with the software. This expertise can be drawn from various sources, including business analysts, customers, and clients, enriching the testing process.
 Early Developer Feedback: By offering timely feedback to developers, this testing catalyzes swift issue resolution, contributing to smoother development cycles.
 Enhanced Familiarity with Software: Experience-based testing empowers testing teams to become intimately familiar with the product's intricacies as the software evolves.
 Ideal for Addressing Operational Failures: Experience-based testing excels when analysing and rectifying operational failures, showcasing its effectiveness in addressing critical issues.
 Diverse Testing Techniques at Your Disposal: Embracing experience-based testing opens the door to many testing techniques, allowing for tailored approaches based on project nuances.
 Efficient Exploratory Testing Initiation: Experience-based testing reduces the need for extensive predefined test plans, enabling testing to kick off swiftly in the early stages of development.
 Filling the Gaps in Automated Testing: Experience-based testing steps in where automated testing falls short, probing aspects of the software that resist effective automation.

Disadvantages of Experience-Based Testing:

 Not Ideal for Detail-Centric Systems: In systems that demand meticulous test documentation, experience-based testing might not be the best fit, due to its reliance on testers' intuition.
 Repeatability Challenges: Consistently replicating test outcomes can be challenging due to the inherent variability of experience-based testing.
 Complex Coverage Assessment: Precisely measuring test coverage becomes more intricate with experience-based testing, posing a challenge to ensuring comprehensive testing.
 Automation Compatibility: Experience-based tests are less conducive to subsequent automation efforts, limiting their integration into automated testing pipelines.
 Quality Tied to Tester Expertise: The quality of testing outcomes is directly linked to individual testers' expertise, which can vary widely, introducing an element of unpredictability.

Case Study #1: Test Cases for an IVR System:


What is the IVR System?
Interactive Voice Response (IVR) is an automated technology that allows interaction with a human being (the caller) with the help of voice input and DTMF (Dual-tone multi-frequency) input using the keypad.

IVR System Architecture


During the end-to-end flow of IVR testing, multiple components are involved: the mobile phone or landline, DTMF inputs, voice input, etc.

[Diagram: architecture of the IVR system]

Technology Used in IVR System


The pointers below explain the technology used in the IVR system.
 Anyone can question how a phone can be connected to the computer
system. And the answer is – using DTMF. Using the tone of every key on
a telephone keypad, the phones are connected to a computer system.
These are known as “Dual-tone multi-frequency (DTMF)” signals.
DTMF tones are entered using a telephone keypad.
 There is another way to communicate which is nothing but
using “Speech Recognition”. Here, the caller provides input to the IVR
system using his clear voice so that IVR can interpret the input correctly
and provide accurate information.
 The IVR system provides an appropriate voice response to the caller’s DTMF input through an “Audio Response Unit (ARU)”. This is a device that provides information to the caller based on the input received from the caller and the information retrieved from the database.
 “Automatic Call Distributor (ACD)” is a technology that distributes
customer calls, in the order they arrive, to the next available appropriate
agent.
 IVR application is a tree structure just like the folders and files structure
in the Windows system. This structure in the IVR is called a call flow
diagram.
 Text-to-Speech (TTS) is a system that converts normal language text into speech. TTS is computer-generated speech that reads out information such as news, email, etc.

To test an IVR application, the following features need to be considered:

1) Verification Process:

Due to emerging technology, there is always a chance of fraud, so it is imperative to test whether the IVR application is free from vulnerabilities. An IVR application always verifies the caller by asking security questions such as date of birth, a 4-digit PIN code, etc. This verification process varies based on the IVR application in use.

2) Call Transfer or Call Routing:

In the IVR system, it is very important to test whether the call is transferred to
the correct agent or not. There are different agents available for different areas
and they are experts in their area only.

3) Dual-tone Multi-frequency (DTMF) Input:

It is the most significant method to provide input to the IVR system. DTMF
inputs are given using the digits 0 to 9 and sometimes * and # from the phone
keypad. For every menu and sub-menu, a caller has to provide different DTMF
inputs and it is a tedious task to test every input in each menu and sub-menu.
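Because every menu level multiplies the DTMF inputs to check, testers often tabulate them mechanically. The sketch below drives a hypothetical one-level menu handler with every digit plus * and #; the menu mapping and function names are invented for the example.

    # Hypothetical IVR main-menu handler, used only for illustration.
    MENU = {"1": "billing", "2": "support", "0": "agent"}

    def handle_dtmf(digit):
        # Unmapped keys should trigger the invalid-input prompt.
        return MENU.get(digit, "invalid-input prompt")

    # Exercise every possible DTMF input for this menu level.
    for digit in list("0123456789") + ["*", "#"]:
        outcome = handle_dtmf(digit)
        print(f"DTMF {digit!r} -> {outcome}")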

4) Retry option in the IVR System:

It often happens that the caller cannot recognize or does not follow the message or prompt played by the IVR system, and then goes silent, unsure about the options offered by the IVR application. In such cases the system should replay the prompt or offer a retry, and this behaviour must be tested.

5) Accent and Pronunciation:


As all the IVR prompts are pre-recorded in the voice, these prompts should be
clear and audible to the caller. Also, the caller’s accent and language
pronunciation should be accurate so that the automated IVR system can
recognize the input from the caller.

Execute Test Cases


Test Execution is the process of executing the tests written by the tester to
check whether the developed code or functions or modules are providing the
expected result as per the client requirement or business requirement. Test
Execution comes under one of the phases of the Software Testing Life Cycle
(STLC).
In the test execution process, the tester will usually write or execute a certain number of test cases and test scripts, or do automated testing. If this uncovers any errors, they are reported to the respective development team to correct the issues in the code. If the test execution process shows successful results, the software is ready for the deployment phase, after the proper setup of the deployment environment.

Importance of Test Execution:


 The project runs efficiently: Test execution ensures that the project runs
smoothly and efficiently.
 Application competency: It also helps to make sure the application’s
competency in the global market.
 Requirements are correctly collected: Test executions make sure that the
requirements are collected correctly and incorporated correctly in design
and architecture.
 Application built in accordance with requirements: It also checks
whether the software application is built in accordance with the
requirements or not.

Test Execution Process:

The test execution process consists of three different phases, which are carried out to process the test results and ensure the correctness of the required outcomes. In each phase, various activities are carried out by various team members. The three main phases of test execution are the creation of test cases, test case execution, and validation of test results. Let us discuss each phase.
1. Creation of Test Cases: The first phase is to create suitable test cases for each module or function. Here, a tester with good domain knowledge is required to create suitable test cases. It is always preferable to create simple test cases, and their creation should not be delayed, or it will add excess time before the product can be released. The created test cases should not duplicate one another, and they should cover all the possible scenarios that can arise in the application.
2. Test Cases Execution: After test cases have been created, execution of test
cases will take place. Here, the Quality Analyst team will either do automated
or manual testing depending upon the test case scenario. It is always preferable
to do both automated as well as manual testing to have 100% assurance of
correctness. The selection of testing tools is also important to execute the test
cases.
3. Validating Test Results: After executing the test cases, note down the results of each test case in a separate file or report. Check whether the executed test cases achieved the expected results, and record the time required to complete each test case, i.e., measure the performance of each test case. If any test case fails or does not satisfy its condition, report it to the development team so the code can be corrected.
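A hedged sketch of this execute-and-validate loop: each test case's actual result is compared with the expected one and its duration recorded; the test-case data and functions under test are invented for the example.

    import time

    # Hypothetical test cases: (id, callable under test, input, expected output).
    test_cases = [
        ("TC1", abs, -5, 5),
        ("TC2", abs, 3, 3),
        ("TC3", len, "abcd", 4),
    ]

    results = []
    for case_id, func, test_input, expected in test_cases:
        start = time.perf_counter()
        actual = func(test_input)
        duration = time.perf_counter() - start
        status = "PASS" if actual == expected else "FAIL"
        results.append({"id": case_id, "status": status, "seconds": duration})
        if status == "FAIL":
            print(f"{case_id}: report to development team "
                  f"(expected {expected!r}, got {actual!r})")

    print(results)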

Generate Incident Report / Anomaly Report:


An incident report in software testing is a formal document that describes an
unexpected event, issue, or problem encountered during the testing process.
This report is typically created by testers, QA engineers, or other relevant
stakeholders when they discover defects, anomalies, or unexpected
behaviors in the software being tested.

Types of Incidents in Software Testing :


In software testing, different types of errors and defects are identified by the testers to ensure the functionality of the software application. During the testing process, different types of incidents and issues are encountered, which testers must understand in order to resolve them. Here are some types of incidents in software testing:

 Functional issues: These issues relate to the functionality of the software application and occur when the software fails to perform as per the SRS. Examples include missing or inaccurate data, malfunctioning features, and incorrect calculations.

 Performance issues: These occur when the performance of the software application does not meet the SRS. Examples of such incidents include speed, responsiveness, and scalability problems.

 Compatibility issues: Such issues occur when a software application fails to work with, or integrate into, different software components and environments. Examples include differences in hardware configuration, OS, browsers, and versions.

 Usability issues: These relate to difficulties or challenges end users face in effectively and efficiently using the software product. For example, such incidents concern the user interface, user experience, and similar areas.

 Security issues: These are vulnerabilities in the software application that interfere with its reliability. Some security issues are data breaches, unauthorized access, and inadequate encryption.
What is an Incident Report?
The test incident report is a document generated after the software testing
process. Its purpose is to ensure transparency among team members by
reporting and logging various incidents and defects. This report addresses
these issues by facilitating assessment, investigation, and resolution. During
the planning phase, the objective is to report all incidents that require
thorough evaluation and investigation while meeting the predetermined
output criteria.

Why Are Incident Reports Needed in Software Testing?
Software testers develop the report after completing the testing of software projects, and it specifically helps the team members. Such incident reports improve communication between team members and help them address the measures taken to develop, test, and evaluate the software application. Some of those measures include development methodologies like Agile, coding standards, testing approaches like unit testing, and others.

In the same way, a test incident report allows the team to document and
classify different incidents that affect the behaviour, performance, and
functionality of the software application. Such incidents include
programming errors, compatibility issues, performance bottlenecks, and
security vulnerabilities.

By logging incident reports with accurate information, details, and evidence, this report enables testers to convey information about tracked incidents to the incident management team. Examples of that information include descriptions of incidents, log files and error messages, test case information, and others.
This helps the team target incidents of inappropriate behaviour and take the required actions to resolve them. Now let us learn some of the specific benefits of creating incident reports in software testing.

Benefits of Creating an Incident Report
Below is a list of some of the benefits of incident reports in software testing.

 It conveys detailed information about the observed behavior of the software application and the various defects tracked by the testing team.
 It provides specifics about failed tests, including when they occurred and supporting evidence.
 It prioritizes defects and incidents, saving the team's time, effort, and resources.
 It helps distinguish between defects and incidents.
 It offers transparency among team members and reduces communication gaps between the team and other project stakeholders.
 It tracks, reports, categorizes, assigns, and manages defects and incidents from their discovery until their final resolution.

Structure of an Incident Report

International organizations such as IEEE and ISO have defined standardized formats for the various reports generated during the Software Development Life Cycle, including the incident report. These formats are universally recognized and accepted by software developers and testers. IEEE Std 829-1998 specifies the format for the test incident report as follows:
 Test incident report identifier: The first information in the report is the
test incident report identifier. It is a unique number generated by the
organization to provide an identity to the report and distinguish it from
others. The identifier helps identify the report's level and association with
specific software. It also enables tracking of the testing phase where the
incident initially occurred and facilitates effective incident resolution
through process improvement.

 Incident summary: After assigning a unique identifier, the team provides an overview or summary of the incident. This section includes all the necessary information and details, focusing on how, when, and where the incident occurred and how the team discovered it. These details help resolve the incident and enhance the quality of the end product. Other details covered in this section include:

 Test procedures, techniques, methodologies, etc., used for incident discovery.
 Test logs that demonstrate the execution of various test cases and procedures.
 Test case specifications illustrating the recreation of the incident.
 Methods employed to discover and resolve the incident.
 Any other significant detail reported by the testing team regarding the incident.

 Incident description: In this section, the team provides additional information about the incident which is not covered in the summary. It includes comprehensive and detailed information about the incident, along with any supporting evidence that helps developers or the incident management team understand the defects and incidents more effectively. Some of the items included in this section are:

 Expected results/outputs.

 Inputs
 Actual results of the conducted tests.

 Testing procedures and their steps.

 Date and time of the executed tests.

 Levels of testing where the incident(s) were discovered.

 Impact: Finally, the report describes the impact the incident has, or may have, on the testing effort and the product, for example on test plans, test procedures, and test case specifications. This helps the team prioritize the incident and plan its resolution.
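Pulling the fields above together, here is a minimal sketch of an incident report record loosely following the IEEE Std 829-1998 outline; all concrete values are invented for illustration.

    incident_report = {
        "identifier": "IR-2024-0042",      # unique report id
        "summary": "Login fails for valid users after password reset",
        "description": {
            "inputs": {"username": "demo", "password": "NewPass!1"},
            "expected_result": "User is logged in",
            "actual_result": "HTTP 500 error page",
            "procedure_steps": ["Reset password", "Log in with new password"],
            "date_time": "2024-05-01T10:42:00",
            "test_level": "System testing",
        },
        "impact": "Blocks the regression suite for the authentication module",
    }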

Log the Defects


Defect logging is simply finding bugs and defects in the application being tested. The testing team checks for potential issues and logs the defects along with their status and priority before assigning them to the development team. Once a defect is detected, defect tracking manages those defects.
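A defect log is usually just one structured record per bug; the sketch below shows a single hypothetical entry with the status and priority fields the paragraph mentions, every value invented for the example.

    defect = {
        "defect_id": "BUG-101",
        "title": "Age field accepts alphabetic input",
        "status": "New",            # New -> Assigned -> Fixed -> Closed
        "priority": "High",
        "assigned_to": "dev-team",  # hypothetical owner
        "found_in_test_case": "TC_REG_001",
    }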

Test Documentation Standards:


 Test Plan: This contains a detailed description of how the test process shall proceed. It includes information such as how much time shall be allocated to testing, who will perform the tests, what will be tested, and the quality to be attained at the end of the testing process.

 Test Design Specification: This includes the test conditions to be


implemented and its outcome.
 Test Case Specification: Information about the specific data that needs to be tested, based on previously gathered data.

 Test Procedure Specification: It specifies how exactly the tester is


expected to perform a test, the environment/set up needed to execute
tests.

 Test Item transmittal report: This is the report about the transfer of
tested items from one stage to the next.

 Test Log: Log contains the details of the executed tests, the sequence in
which they were executed and also the pass/fail report of tests.

 Test Incident report: Key points stating the reasons for deviation between actual and expected results.

 Test Summary report: A complete summary report stating the overall


test procedure and the outcome such as how efficiently the testing has
been performed, assessing the quality of the system, the time taken to test
a certain condition and whether any issues were there or not.

 Master Test Plan - This describes the overall test plan in detail at various levels of testing.

 Level Test Plan - For each level of the test process, its scope, approach, and schedule need to be defined for carrying out the testing activities, along with the components to be tested, their features, and the risks involved in implementing each test condition.

 Level Test Design -Test cases to be described in detail, the expected


results and the passing criteria of the test.
 Level Test case - The data to be used in executing the test cases
identified during the Level test design.

 Level Test Procedure - Detail regarding how the test cases are to be
executed, including the specified preconditions and steps to be followed.

 Level Test log - A complete log of information consisting of the relevant


details about the execution of tests.

 Anomaly Report - This report marks any issue, defect, test incident, or error. It basically notes the failure report and the solutions for such issues, if any.

 Level interim test status report - This document aims to record the
summary of results for each level of test.

 Level test report - Summarises the results of the testing activities and evaluates and recommends solutions for the test results.

 Master test report - A comprehensive report about the efficiency of the testing activities, which ascertains that the quality of testing is up to the mark and also analyses the anomaly report. After analysing the final report, one should be able to reach a conclusion on whether the system under test is robust enough to serve its intended purpose.

Formal Methods of Testing:


Formal testing is a type of software testing in which the testing of the
software is done with proper planning and with proper documentation of
its test cases.

What is Formal Testing?


In formal testing, the degree of thoroughness and formality of the test cases depends upon the requirements of the project. Formal testing follows a systematic process called the Software Testing Life Cycle (STLC).

Formal Testing Process

Formal testing follows a systematic approach known as the STLC, so naturally it contains all the steps involved in it.

Fig 1: Steps involved in Formal Testing

1. Requirement Analysis: In this phase, the testing team understands the requirements, i.e. what is to be tested. If anything is missing or unclear, the testing team meets with the stakeholders to gain detailed knowledge of the requirement.

2. Planning Tests: The proper planning for software testing involves the
formulation of a test objective, test strategy, and schedule. A plan helps
developers of the software in many ways like analysing the effort required
to validate the quality of the software, fixing a schedule for appropriate
testing of the software, etc. The planning of software testing involves
multiple steps such as:
 Product Analysis: To understand the software, for example the GFG application, we first need to know everything there is to know about the application itself; after that, we need to know about the users who use the application and their expectations from it.

 Defining Objectives: The objective of a test is to find as many problems present in the software as possible before it gets released to the public, so that it can be bug-free and easy to use. The test objective can also be chosen on the basis of memory usage, the application's functionality, its performance, etc.

 Deciding Test Criteria: Test criteria are conditions according to which decisions about the testing of the software can be made. They are of two types:

Suspension: The suspension criteria state that if the failure rate of testing for a specific software project reaches a particular threshold, then testing of the software should be stopped temporarily until those errors are fixed, and the test result should be marked as failed.

Exit: The exit criteria state that if the success rate of testing for a specific software project reaches a particular threshold, then testing of the software should be stopped and the results of the testing should be marked as passed.

 Strategizing the test: Strategizing involves choosing the scope of the tests and the effort that will be required in them. It consists of four steps:
Defining the scope of the test: Scope refers to the parts of the software
that will be allowed to be tested during the testing period. It may be the
front end of the application, the back end of the application, or maybe
something totally different.

Identifying the test type: The type of testing needs to be determined so that we can perform the correct test at the correct time. For example, for testing only the front or back end we will do unit testing for both of them, and when we test the entire application we will do integration and other tests.

Risk assessment: Identifying the risks associated with the project is also
an important task during planning. The developers do the feasibility
study of the project to find the best approach for developing the software
with the least amount of risk involved.

Selecting the testers: Choosing the right testers for the software is also important. The testers' skills must be sufficient to support the results they produce during the test.

3. Documentation of Test Cases: Documentation of a test case can be developed from design requirements, domain coverage, equivalence partitioning, boundary testing, etc.
 Design requirement: It is the feature the developers are trying to
create using the software. The test cases can be documented on how
they meet the design requirements specified for the software under
development.
 Domain Coverage: It is a type of white box testing in which the software under development is provided with a minimal number of test inputs, and its output is verified to check whether it accepts any values that are out of bounds or out of range to be considered valid input to the software.
 Equivalence Partitioning : It is a type of black box testing in which the
input data is divided into equivalent valid and invalid parts from which
test cases are derived.
 Boundary Testing: It is a type of black box testing where boundary
values of valid and invalid partitions are provided as input to the
software to look for defects in the output as edge cases are more likely
to produce errors than values inside partitions.
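The last two techniques are easy to show together. For a hypothetical input field that accepts ages 18 to 60 inclusive (the rule and function are invented for the example), equivalence partitioning picks one representative per partition, and boundary testing adds the values at and around each edge.

    def is_valid_age(age):
        # Hypothetical rule: valid ages are 18..60 inclusive.
        return 18 <= age <= 60

    # Equivalence partitioning: one representative per partition.
    assert not is_valid_age(10)   # invalid partition: below range
    assert is_valid_age(35)       # valid partition
    assert not is_valid_age(70)   # invalid partition: above range

    # Boundary testing: values at and around each boundary.
    for age, expected in [(17, False), (18, True), (19, True),
                          (59, True), (60, True), (61, False)]:
        assert is_valid_age(age) == expected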
4. Setting Up the Test Environment: The test environment is defined as
the place that may be hardware or software where the software under
development will be tested. The environment simulates the appropriate
network configuration required by the software to be tested to its full
potential. It is an independent activity and can be started along with test case development. The testing team is not involved in this process; either the developer or the customer creates the testing environment.
5. Execution of Tests: After the development of the test cases and
setting up the test environment, the test execution step takes place. In this
step, the testing team starts executing the test cases based on the
prepared test cases in the earlier steps.
6. Closure of Tests: This is the last step of the formal testing process, where the completed testing is analysed.

Advantages of Formal Testing

Below are some of the advantages of formal testing:


 Uncovers ambiguity: It uncovers ambiguous, inconsistent, and incomplete data in the software.
 Gets better in every iteration: It provides better results every time the steps are repeated.
 Less Complexity: With proper testing and documentation, complexity is reduced.
 Bug-Free: It results in a version of the software that is almost free from bugs.

Disadvantages of Formal Testing


Below are some of the disadvantages of formal testing:
 Time Consuming: Formal testing process takes a lot of time due to its
long list of planning and documentation.
 Expensive: It is also a very expensive task due to the cost of all the
test case generation, documentation, and planning.
 Difficult to implement: It is difficult to implement with non-technical
staff who don’t have proper knowledge about the software.
 Requires Training: The testers and developers working on the automation of testing need to be trained, since developers with knowledge of implementing formal modes of testing are very limited.
