software testing unit II final (1)

The document outlines the essential components and processes involved in test planning for software development, including the types of test plans, steps to write a test plan, and the roles and responsibilities of team members. It emphasizes the importance of a structured test plan to ensure effective testing, risk management, and clear communication among team members. Additionally, it discusses various testing phases, methodologies, and the significance of static and unit testing in the software development lifecycle.


UNIT II TEST PLANNING

The Goal of Test Planning, High Level Expectations, Intergroup Responsibilities, Test Phases, Test
Strategy, Resource Requirements, Tester Assignments, Test Schedule, Test Cases, Bug
Reporting, Metrics and Statistics.

The Goal of Test Planning

A test plan is a detailed document which describes software testing areas and activities. It
outlines the test strategy, objectives, test schedule, required resources (human resources,
software, and hardware), test estimation and test deliverables.

The test plan is the basis of all software testing. Preparing it is a crucial activity because it
ensures that all planned activities are listed in an appropriate sequence.

The test plan is a template for conducting software testing activities as a defined process that
is fully monitored and controlled by the testing manager. The test plan is prepared by the Test
Lead (60%), the Test Manager (20%), and the Test Engineer (20%).

Types of Test Plan

 Master Test Plan
 Phase Test Plan
 Testing Type Specific Test Plans

Master Test Plan

Master Test Plan is a type of test plan that has multiple levels of testing. It includes a complete test
strategy.

Phase Test Plan

A phase test plan is a type of test plan that addresses a single phase of the testing strategy; for
example, it may include the list of tools, the list of test cases, etc.

Specific Test Plans

A specific test plan is designed for a major type of testing, such as security testing, load testing, or
performance testing. In other words, a specific test plan is designed for non-functional testing.

How to write a Test Plan

Making a test plan is the most crucial task of the test management process. According to IEEE 829,
follow these seven steps to prepare a test plan.

 First, analyze the product structure and architecture.
 Design the test strategy.
 Define all the test objectives.
 Define the testing area.
 Define all the usable resources.
 Schedule all activities in an appropriate manner.
 Determine all the test deliverables.

Test plan components or attributes

The test plan consists of various parts, which help us to derive the entire testing activity.

Objectives: This section contains information about the modules, features, test data, etc., and states
the aim of testing the application, i.e., its expected behavior, goals, and so on.

Scope: It contains information about what needs to be tested with respect to the application. The
scope can be further divided into two parts:
 In scope
 Out of scope

In scope: These are the modules that need to be tested rigorously (in-detail).

Example: In an application, features A, B, C, and D have to be developed, but feature B has already
been built by another company. So the development team will purchase B from that company and
perform only integration testing of B with the remaining features.

Testing Methodology: The methods to be used for testing vary from application to application; the
testing methodology is decided based on the features and the application requirements.

Since testing terms are not standardized, the test plan should define what kind of testing will be used
in the testing methodology, so that everyone can understand it.

Approach: The approach to testing differs from one piece of software to another. It describes the
flow of the application for future reference. It has two aspects:

 High-Level Scenarios: High-level scenarios are written for testing critical features, for
example, logging in to a website or making a booking on a website.
 The Flow Graph: A flow graph is used when converging and merging flows need to be
represented easily.

Assumptions: In this phase, certain assumptions will be made.

Example:

The testing team will get proper support from the development team.

The tester will get proper knowledge transfer from the development team.

Proper resource allocation will be given by the company to the testing department.

Risk: This section lists all the risks that can occur if an assumption fails. For example, in the case of
a wrong budget estimate, the cost may overrun. Some reasons that may lead to risk are:

 The test manager has poor management skills.
 The project is hard to complete on time.
 Lack of cooperation.

Backup/Mitigation Plan: If any risk is involved, the company must have a backup plan; the purpose
is to avoid errors. Some points to resolve/avoid risk:
 Test priority is to be set for each test activity.
 Managers should have leadership skills.
 Training course for the testers.

Roles and Responsibilities: The role and responsibilities of every member of the testing team have
to be recorded.

Example:

 Test Manager: Manages the project, allocates appropriate resources, and gives project direction.
 Tester: Identifies the testing techniques, verifies the test approach, and helps save project cost.

Scheduling: This section records the start and end dates of each testing-related activity, for example,
the dates for starting and finishing test case writing.

Defect Tracking: Defect tracking is an important process in software engineering, as many issues
arise when developing a critical system for a business. Any defect found while testing must be passed
to the development team. The process of defect tracking involves the following steps:

 Information Capture: Basic information is captured to begin the process.
 Prioritize: The task is prioritized based on severity and importance.
 Communicate: Communication takes place between the identifier of the bug and its fixer.
 Environment: The application is tested on the relevant hardware and software.

Example: The bug can be identified using bug tracking tools such as Jira, Mantis, Trac.
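The capture/prioritize steps above can be sketched as a small data structure. This is a hypothetical Python sketch; the Defect fields and the prioritize() helper are illustrative only, not the schema of any real tool such as Jira:

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical minimal defect record; real trackers store many more
# fields (attachments, comments, workflow transitions, ...).
@dataclass
class Defect:
    defect_id: int
    summary: str
    severity: str          # e.g. "critical", "major", "minor"
    priority: str          # e.g. "P1", "P2", "P3"
    environment: str       # hardware/software the bug was seen on
    status: str = "open"   # open -> assigned -> fixed -> verified -> closed
    reported_on: date = field(default_factory=date.today)

def prioritize(defects):
    """Order defects so the most severe/urgent ones are handled first."""
    sev_rank = {"critical": 0, "major": 1, "minor": 2}
    return sorted(defects, key=lambda d: (sev_rank[d.severity], d.priority))

bugs = [
    Defect(1, "Crash on login", "critical", "P1", "Windows 11"),
    Defect(2, "Typo on help page", "minor", "P3", "Any"),
    Defect(3, "Slow report export", "major", "P2", "Linux"),
]
ordered = prioritize(bugs)
print([d.defect_id for d in ordered])  # [1, 3, 2]: critical, major, minor
```

The sort key places severity before priority, mirroring the "prioritize based on severity and importance" step above.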

Test Environment: This is the environment the testing team will use, i.e., the list of hardware and
software used while testing the application. The items to be tested are listed under this section, and
the installation of software is also checked here.

Example:

Software configuration on different operating systems, such as Windows, Linux, Mac, etc.

Hardware Configuration depends on RAM, ROM, etc.

Entry and Exit Criteria: The set of conditions that should be met in order to start any new type of
testing or to end any kind of testing.

Entry Condition:

 Necessary resources must be ready.
 The application must be prepared.
 Test data should be ready.

Exit Condition:

 There should not be any major bug.
 Most test cases should be passed.
 All test cases should have been executed.

Example: If the team reports that 45% of the test cases have failed, testing will be suspended until
the development team fixes the defects.
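The entry/exit conditions above can be expressed as a small check. This is a hypothetical sketch; the function name and the 45% suspension threshold follow the example in this section:

```python
# Hypothetical exit-criteria check: suspend testing when too many
# test cases fail (the section's example uses a 45% failure rate).
def exit_status(executed, passed, total, max_fail_rate=0.45):
    if executed < total:
        return "continue"                 # not all test cases executed yet
    fail_rate = (executed - passed) / executed
    if fail_rate >= max_fail_rate:
        return "suspend"                  # hand back to the development team
    return "exit"                         # exit criteria met

print(exit_status(executed=100, passed=50, total=100))  # suspend (50% failed)
print(exit_status(executed=100, passed=90, total=100))  # exit
print(exit_status(executed=80,  passed=79, total=100))  # continue
```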

Test Automation: This section records which features are to be automated and which are not.

 If a feature has lots of bugs, it is categorized for manual testing.
 If a feature is frequently tested, it can be automated.

Deliverables: These are the outputs of the testing team that are to be given to the customer at the
end of the project.

Before the testing phase:

 Test plan document.
 Test case document.
 Test design specification.

During the testing phase:

 Test scripts.
 Test data.
 Error logs.

After the testing phase:

 Test reports.
 Defect report.
 Installation report.

It contains a test plan, defect report, automation report, assumption report, tools, and other
components that have been used for developing and maintaining the testing effort.

Template: A template is followed for every kind of report that is to be prepared by the testing team.

What If There is No Test Plan?

Below are some of the situations that may occur if there is no test plan in place:

 Misunderstanding of roles and responsibilities.
 The test team will have no clear objective.
 No certainty about when the process ends.
 An undefined test scope may confuse testers.

Tester Roles and Responsibilities

Responsibilities of a Test Manager:

− Manage the Testing Department.

− Allocate resource to projects.

− Review weekly testers' status reports and take necessary actions.

− Escalate Testers' issues to the Sr. Management.

− Estimate for testing projects.

− Enforce the adherence to the company's Quality processes and procedures.

− Decide on the procurement of software testing tools for the organization.


− Handle inter-group coordination between various departments.

− Provide technical support to the Testing team.

− Continuous monitoring and mentoring of Testing team members.

− Review of test plans and test cases.

− Attend weekly meeting of projects and provide inputs from the Testers' perspective.

− Immediate notification/escalation of problems to the Sr. Test Manager / Senior Management.

− Ensure processes are followed as laid down.

Responsibilities of a Test Lead:

− Prepare the Software Test Plan.

− Check/review the test case documents (system integration and user acceptance) prepared by the
test engineers.

− Analyze requirements during the requirements analysis phase of projects.

− Keep track of the new requirements from the Project.

− Forecast / Estimate the Project future requirements.

− Arrange the Hardware and software requirement for the Test Setup.

− Develop and implement test plans.

− Escalate issues about project requirements (software, hardware, resources) to the Project Manager /
Test Manager.

− Escalate the issues in the application to the Client.

− Assign task to all Testing Team members and ensure that all of them have sufficient work in the
project.

− Organize the meetings.

− Prepare the Agenda for the meeting for example: Weekly Team meeting etc.

− Attend the regular client call and discuss the weekly status with the client.

− Send the Status Report (Daily Weekly etc.) to the Client.


− Frequent status check meetings with the Team.

− Communication by means of Chat / emails etc. with the Client (If required).

− Act as the single point of contact between Development and Testers for iterations Testing and
deployment activities.

− Track and report on testing activities, including testing results, test case coverage, required
resources, defects discovered and their status, performance baselines, etc.

− Assist in performing any applicable maintenance to tools used in Testing and resolve issues if any.

− Ensure content and structure of all Testing documents / artifacts is documented and maintained.

− Document, implement, monitor, and enforce all testing processes and procedures as per the
standards defined by the organization.

− Review various reports prepared by Test engineers.

− Log project related issues in the defect tracking tool identified for the project.

− Check for timely delivery of different milestones.

− Identify Training requirements and forward it to the Project Manager (Technical and Soft skills).

− Attend weekly Team Leader meeting.

− Motivate team members.

− Organize / Conduct internal trainings on various products.

Responsibilities of a tester:

- Understand project requirements.

- Prepare/update the test case document for testing the application from all aspects.

- Prepare the test setup.

- Deploy the build in the required setup.

- Conduct the testing, including sanity and functional testing, and execute the test cases.

- Update the test result document.

- Attend the Regular client calls.


- Log / File the defects in Defect tracking tool / Bug Report.

- Verify defects.

- Discuss doubts/queries with Development Team / Client.

- Conduct internal trainings on various products.

Test Phases

There are five test phases:

1. Static Testing

2. Unit Testing

3. Integration Testing

4. System Testing

5. Acceptance Testing

Let us see each in detail.

1. Static Testing

Static testing is a verification process used to test the application without executing its code, and it
is a cost-effective process.

To avoid errors, we perform static testing at the initial stage of development, because at that stage it
is easier to identify the sources of errors and they can be fixed easily.

In other words, static testing can be done manually or with the help of tools to improve the quality
of the application by finding errors at an early stage of development; it is also called the verification
process.

We can do some of the following important activities while performing static testing:

 Business requirement review


 Design review
 Code walkthroughs
 The test documentation review

What are the different artifacts we can test in Static Testing?

We can review the following artifacts in static testing:
 BRD [Business Requirements Document]
 Functional or system Requirements
 Unit Use Cases
 Prototype
 Prototype Specification Document
 Test Data
 DB Fields Dictionary Spreadsheet

Why do we need Static Testing?

We require static testing whenever we encounter the following situations while testing an application
or software:

 Dynamic testing is time-consuming
 Flaws at earlier stages / identification of bugs
 Dynamic testing is expensive
 Increased size of the software

Dynamic Testing is time-consuming

We need static testing because dynamic testing is a time-consuming process, even though dynamic
testing identifies bugs and provides some information about them.

Flaws at earlier stages / identification of bugs

When developing software, we cannot rely completely on dynamic testing, as it finds the bugs or
defects of the application only at a later stage, and it takes programmers plenty of time and effort to
fix bugs found late.

Dynamic Testing is expensive

We need to perform static testing on the software product because dynamic testing is more expensive
than static testing: test cases have to be created in the initial stages, and we also need to maintain the
implementation and validation of the test cases, which takes a lot of the test engineers' time.
Static Testing Techniques

Static testing techniques offer a great way to enhance the quality and efficiency of software
development. The Static testing technique can be done in two ways, which are as follows:

 Review
 Static Analysis

Review

In static testing, a review is a technique or process used to find possible bugs in the application. In
the review process, we can easily identify and eliminate faults and defects in the various supporting
documents, such as the SRS [Software Requirements Specification]. In other words, a review in
static testing is a meeting in which all the team members come to understand the project's progress.

In static testing, reviews can be divided into four different parts, which are as follows:

 Informal reviews
 Walkthroughs
 Technical/peer review
 Inspections
Informal reviews

In an informal review, the document designer places the contents in front of the reviewers, and
everyone gives their views; therefore, bugs are identified at an early stage.

Walkthrough

Generally, a walkthrough review is performed by a skilled person or expert to verify the bugs, so
that problems are less likely to arise in the development or testing phase.

Peer review

In a peer review, team members check one another's documents to find and resolve bugs; it is
generally done within a team.

Inspection

An inspection is essentially verification of a document by a higher authority, for example,
verification of the SRS [Software Requirements Specification] document.

Static Analysis

Another static testing technique is static analysis, which involves assessing the quality of the code
written by the developers.

We can use the different tools to perform the code's analysis and evaluation of the same.

In other words, we can say that developers' developed code is analyzed with some tools for structural
bugs, which might cause the defects.

The static analysis will also help us to identify the below errors:

 Dead code
 Unused variables
 Endless loops
 Incorrect syntax
 Variable with undefined value

In static testing, static analysis can be further classified into three parts, as discussed below:

Data Flow: In static analysis, data flow analysis is concerned with the stream of data processing.

Control Flow: Generally, the control flow is used to specify how the commands or instructions are
implemented.

Cyclomatic Complexity: It is the measurement of the program's complexity, which is mostly linked
to the number of independent paths in the control flow graph of the program.
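Cyclomatic complexity is commonly computed as V(G) = E - N + 2P, where E is the number of edges, N the number of nodes, and P the number of connected components of the control flow graph; for a single structured function this reduces to the number of decision points plus one. A rough sketch in Python using the standard ast module (real tools such as static analyzers are more thorough):

```python
import ast

# Branching constructs counted as decision points (a rough approximation;
# each boolean operator chain is counted once here).
DECISIONS = (ast.If, ast.For, ast.While, ast.BoolOp)

def cyclomatic_complexity(source):
    """Estimate complexity as (decision points) + 1 for structured code."""
    tree = ast.parse(source)
    decisions = sum(isinstance(node, DECISIONS) for node in ast.walk(tree))
    return decisions + 1

code = """
def grade(score):
    if score >= 90:
        return "A"
    if score >= 75:
        return "B"
    return "C"
"""
print(cyclomatic_complexity(code))  # 3: two 'if' decisions + 1
```

Three independent paths through grade() (the A, B, and C returns) match the computed value of 3.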

Tools used for Static Testing

 CheckStyle
 SourceMeter
 Soot

Advantages of Static Testing

 Improved product quality
 Reduced SDLC cost
 Immediate evaluation and feedback

Unit Testing

Unit testing is a type of software testing that focuses on individual units or components of a
software system. The purpose of unit testing is to validate that each unit of the software
works as intended and meets the requirements. Unit testing is typically performed by
developers, and it is performed early in the development process before the code is integrated
and tested as a whole system.

Unit tests are automated and are run each time the code is changed to ensure that new code
does not break existing functionality. Unit tests are designed to validate the smallest possible
unit of code, such as a function or a method, and test it in isolation from the rest of the
system. This allows developers to quickly identify and fix any issues early in the
development process, improving the overall quality of the software and reducing the time
required for later testing.

Unit testing is defined as a type of software testing where individual components of the software
are tested. Unit testing of the software product is carried out during the development of an
application. An individual component may be either an individual function or a procedure. Unit
testing is typically performed by the developer. In the SDLC or V-Model, unit testing is the first
level of testing, done before integration testing. Although unit testing is usually performed by
developers, quality assurance engineers also do unit testing when developers are reluctant to test.

Objective of Unit Testing:

The objective of Unit Testing is:

 To isolate a section of code.
 To verify the correctness of the code.
 To test every function and procedure.
 To fix bugs early in the development cycle and to save costs.
 To help the developers understand the code base and enable them to make changes quickly.
 To help with code reuse.

Types of Unit Testing:

There are two types of unit testing: manual and automated.


Workflow of Unit Testing:

Unit Testing Techniques:

There are three unit testing techniques:

Black Box Testing: This testing technique is used in covering the unit tests for input, user
interface, and output parts.

White Box Testing: This technique is used in testing the functional behavior of the system
by giving the input and checking the functionality output including the internal design
structure and code of the modules.

Gray Box Testing: This technique is used in executing the relevant test cases, test methods,
test functions, and analyzing the code performance for the modules.

Unit Testing Tools:

Here are some commonly used Unit Testing tools:

 Jtest
 Junit
 NUnit
 EMMA
 PHPUnit
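The tools listed above follow the xUnit pattern. As a minimal sketch of that pattern in Python's built-in unittest module (the discount() function under test is hypothetical):

```python
import unittest

# Hypothetical unit under test: a single, isolated function.
def discount(price, percent):
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (100 - percent) / 100, 2)

# One test method per behavior; these run automatically on every change.
class DiscountTest(unittest.TestCase):
    def test_normal_discount(self):
        self.assertEqual(discount(200.0, 25), 150.0)

    def test_zero_discount_returns_price(self):
        self.assertEqual(discount(99.99, 0), 99.99)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            discount(100.0, 150)

# Run the tests programmatically and report the overall result.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(DiscountTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("all tests passed:", result.wasSuccessful())  # all tests passed: True
```

Note how each test exercises the unit in isolation, including the error path, which is exactly the "smallest possible unit of code" idea described above.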

Advantages of Unit Testing:

 Unit Testing allows developers to learn what functionality is provided by a unit and
how to use it to gain a basic understanding of the unit API.
 Unit testing allows the programmer to refine code and make sure the module works
properly.
 Unit testing enables testing parts of the project without waiting for others to be
completed
Disadvantages of Unit Testing:

 Writing unit test cases is a time-consuming process.
 Unit testing will not cover all the errors in a module, because some errors only appear when
the modules are integrated.
 Unit testing is not efficient for checking errors in the UI (User Interface) part of the module.

Integration Testing

Integration Testing is defined as a type of testing where software modules are integrated
logically and tested as a group. A typical software project consists of multiple software
modules, coded by different programmers. The purpose of this level of testing is to expose
defects in the interaction between these software modules when they are integrated.

Integration Testing focuses on checking data communication amongst these modules. Hence
it is also termed as ‘I & T’ (Integration and Testing), ‘String Testing’ and sometimes ‘Thread
Testing’.

Although each software module is unit tested, defects still exist, for various reasons:

A module, in general, is designed by an individual software developer, whose understanding and
programming logic may differ from those of other programmers. Integration testing becomes
necessary to verify that the software modules work in unity.

At the time of module development, there is a wide chance of changes in requirements from the
clients. These new requirements may not be unit tested, so system integration testing becomes
necessary.

 Interfaces of the software modules with the database could be erroneous.
 External hardware interfaces, if any, could be erroneous.
 Inadequate exception handling could cause issues.

Types of Integration Testing

Software engineering defines a variety of strategies to execute integration testing, viz.:

Big Bang Approach

Incremental Approach, which is further divided into the following:

 Top Down Approach
 Bottom Up Approach
 Sandwich Approach – Combination of Top Down and Bottom Up

Big Bang Testing

Big Bang Testing is an Integration testing approach in which all the components or modules
are integrated together at once and then tested as a unit. This combined set of components is
considered as an entity while testing. If all of the components in the unit are not completed,
the integration process will not execute.

Advantages:

 Convenient for small systems.

Disadvantages:

 Fault Localization is difficult.


 Given the sheer number of interfaces that need to be tested in this approach, some
interface links to be tested could easily be missed.
 Since the Integration testing can commence only after “all” the modules are designed,
the testing team will have less time for execution in the testing phase.
 Since all modules are tested at once, high-risk critical modules are not isolated and
tested on priority. Peripheral modules which deal with user interfaces are also not
isolated and tested on priority.

Incremental Testing

In the Incremental Testing approach, testing is done by integrating two or more modules that
are logically related to each other and then tested for proper functioning of the application.
Then the other related modules are integrated incrementally and the process continues until
all the logically related modules are integrated and tested successfully.

The Incremental Approach, in turn, is carried out by two different methods:

 Bottom Up
 Top Down

Stubs and Drivers

Stubs and drivers are dummy programs used in integration testing to facilitate the software testing
activity. These programs act as substitutes for the missing modules in the testing. They do not
implement the entire programming logic of the software module, but they simulate data
communication with the calling module while testing.

Stub: Is called by the Module under Test.

Driver: Calls the Module to be tested.
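A minimal sketch of both roles in Python (the billing example is hypothetical): the stub is called by the module under test and returns canned data, while the driver calls the module under test in place of its missing caller.

```python
# Stub: called BY the module under test; stands in for a tax service
# that is not yet implemented, returning canned data.
def tax_rate_stub(country):
    return {"IN": 0.18, "US": 0.07}.get(country, 0.10)

# Module under test: computes an invoice total using the tax service.
def invoice_total(amount, country, get_tax_rate):
    return round(amount * (1 + get_tax_rate(country)), 2)

# Driver: CALLS the module under test with prepared inputs and checks
# the outputs, standing in for the missing calling module.
def driver():
    assert invoice_total(100.0, "IN", tax_rate_stub) == 118.0
    assert invoice_total(100.0, "US", tax_rate_stub) == 107.0
    return "driver: all checks passed"

print(driver())
```

Passing the tax-rate function as a parameter is what lets the stub be swapped for the real service later without changing the module under test.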

Bottom-up Integration Testing

Bottom-up Integration Testing is a strategy in which the lower level modules are tested first.
These tested modules are then further used to facilitate the testing of higher level modules.
The process continues until all modules at top level are tested. Once the lower level modules
are tested and integrated, then the next level of modules are formed.

Diagrammatic Representation:

Advantages:

 Fault localization is easier.
 No time is wasted waiting for all modules to be developed, unlike the Big Bang approach.

Disadvantages:
 Critical modules (at the top level of software architecture) which control the flow of
application are tested last and may be prone to defects.
 An early prototype is not possible

Top-down Integration Testing

Top Down Integration Testing is a method in which integration testing takes place from top
to bottom following the control flow of software system. The higher level modules are tested
first and then lower level modules are tested and integrated in order to check the software
functionality. Stubs are used for testing if some modules are not ready.

Diagrammatic Representation:

Advantages:

 Fault localization is easier.
 Possibility to obtain an early prototype.
 Critical modules are tested on priority; major design flaws could be found and fixed first.

Disadvantages:

 Needs many stubs.
 Modules at a lower level are tested inadequately.

Sandwich Testing
Sandwich Testing is a strategy in which top level modules are tested with lower level
modules at the same time lower modules are integrated with top modules and tested as a
system. It is a combination of Top-down and Bottom-up approaches therefore it is called
Hybrid Integration Testing. It makes use of both stubs as well as drivers.

System Testing

System testing is a type of software testing that evaluates the overall functionality and
performance of a complete and fully integrated software solution. It tests if the system meets
the specified requirements and if it is suitable for delivery to the end-users. This type of
testing is performed after the integration testing and before the acceptance testing.

System Testing is a type of software testing that is performed on a complete, integrated system to
evaluate the compliance of the system with the corresponding requirements. In system testing, the
components that passed integration testing are taken as input. The goal of integration testing is to
detect any irregularity between the units that are integrated together, whereas system testing detects
defects within both the integrated units and the whole system. The result of system testing is the
observed behavior of a component or a system when it is tested. System testing is carried out on the
whole system, in the context of the system requirement specifications, the functional requirement
specifications, or both.

System testing tests the design and behavior of the system, as well as the expectations of the
customer. It is usually performed by a testing team that is independent of the development team,
which helps to assess the quality of the system impartially. It covers both functional and
non-functional testing. System testing is black-box testing, and it is performed after integration
testing and before acceptance testing.
System Testing is performed in the following steps:

Test Environment Setup: Create the testing environment for better-quality testing.

Create Test Case: Generate test case for the testing process.

Create Test Data: Generate the data that is to be tested.

Execute Test Case: After the generation of the test case and the test data, test cases are
executed.

Defect Reporting: Defects detected in the system are reported.

Regression Testing: Regression testing is carried out to check for side effects of the fixes.

Log Defects: Defects are logged and fixed in this step.

Retest: If a test is not successful, the test is performed again.

Types of System Testing:

Performance Testing: Performance Testing is a type of software testing that is carried out to
test the speed, scalability, stability and reliability of the software product or application.

Load Testing: Load testing is a type of software testing carried out to determine the behavior of a
system or software product under extreme load.

Stress Testing: Stress testing is a type of software testing performed to check the robustness of the
system under varying loads.

Scalability Testing: Scalability testing is a type of software testing carried out to check the
performance of a software application or system in terms of its capability to scale up or scale down
the user request load.

Tools used for System Testing:

 JMeter
 Galen Framework
 Selenium

Advantages of System Testing:

 Verifies the overall functionality of the system.
 Detects and identifies system-level problems early in the development cycle.
 Helps to validate the requirements and ensure the system meets the user needs.
 Improves system reliability and quality.

Disadvantages of System Testing:

 Can be time-consuming and expensive.
 Requires adequate resources and infrastructure.
 Can be complex and challenging, especially for large and complex systems.
 Dependent on the quality of requirements and design documents.

Acceptance Testing

Acceptance testing is a formal testing process conducted according to user needs, requirements, and
business processes to determine whether a system satisfies the acceptance criteria, and to enable the
users, customers, or other authorized entities to decide whether or not to accept the system.

Acceptance Testing is the last phase of software testing performed after System Testing and
before making the system available for actual use.

Types of Acceptance Testing:


User Acceptance Testing (UAT): User acceptance testing is used to determine whether the
product is working for the user correctly. Specific requirements which are quite often used by
the customers are primarily picked for the testing purpose. This is also termed as End-User
Testing.

Business Acceptance Testing (BAT): BAT is used to determine whether the product meets the
business goals and purposes. BAT mainly focuses on business profits, which can be challenging due
to changing market conditions and new technologies; the current implementation may have to be
changed, which results in extra budget.

Contract Acceptance Testing (CAT): CAT is based on a contract specifying that once the product
goes live, the acceptance test must be performed within a predetermined period and must pass all the
acceptance use cases. There is a contract, termed a Service Level Agreement (SLA), which includes
terms stating that payment will be made only if the product's services are in line with all the
requirements, i.e., the contract is fulfilled. Sometimes this contract is drawn up before the product
goes live. The contract should be well defined in terms of the period of testing, areas of testing,
conditions on issues encountered at later stages, payments, etc.

Regulations Acceptance Testing (RAT): RAT is used to determine whether the product violates the rules and regulations defined by the government of the country where it is being released. This may be unintentional but will impact the business negatively. Generally, a product or application that is to be released in the market has to undergo RAT, as different countries or regions have different rules and regulations defined by their governing bodies. If any rules or regulations are violated for a country or region, the product will not be released in that country or region. If the product is released despite a violation, the vendors of the product will be directly responsible.

Operational Acceptance Testing (OAT): OAT is used to determine the operational readiness of the product and is non-functional testing. It mainly includes testing of recovery, compatibility, maintainability, reliability, etc. OAT assures the stability of the product before it is released to production.

Alpha Testing: Alpha testing is used to assess the product in the development/testing environment by a specialized team of testers, usually called alpha testers.
Beta Testing: Beta testing is used to assess the product by exposing it to the real end-users,
usually called beta testers in their environment. Feedback is collected from the users and the
defects are fixed. Also, this helps in enhancing the product to give a rich user experience.

Use of Acceptance Testing:

 To find the defects missed during the functional testing phase.


 To determine how well the product is developed.
 To verify that the product is what the customers actually need.
 Feedback helps in improving the product performance and user experience.
 To minimize or eliminate the issues arising in production.

Advantages of Acceptance Testing :

 This testing helps the project team to know the further requirements from the users directly as
it involves the users for testing.
 Automated test execution.
 It brings confidence and satisfaction to the clients as they are directly involved in the testing
process.
 It is easier for the user to describe their requirement.
 It involves only the black-box testing process, and hence the entire functionality of the product
will be tested.

Disadvantages of Acceptance Testing :

 Users should have basic knowledge about the product or application.


 Sometimes, users don’t want to participate in the testing process.
 The feedback from the testing takes a long time, as it involves many users, and the opinions may
differ from one user to another.
 The development team does not participate in this testing process.

Test Strategy

A test strategy is a high-level document that defines the test types or levels to be executed for the
product and specifies the testing approach within the Software Development Life Cycle.

Once the test strategy has been written, we cannot modify it; it is approved by the Project
Manager and the development team.
The test strategy also specifies the following details, which are necessary while we write the test
document:

What other procedures have to be used?

Which module is going to be tested?

Which entry and exit criteria apply?

Which type of testing needs to be implemented?

In other words, we can say that it is a document, which expresses how we go about testing the
product. And the approaches can be created with the help of following aspects:

 Automation or not
 Resource point of view

Components of Test Strategy Document

We understand that the test strategy document is made during the requirements phase and after the
requirements have been listed.

Like other testing documents, the test strategy document also includes various components, such as:
 Scope and Overview
 Testing Methodology
 Testing Environment Specifications
 Testing Tools
 Release Control
 Risk Analysis
 Review and Approvals

Scope and Overview

 The first component of the test strategy document is Scope and Overview.
 The overview of any product contains the information on who should approve, review and use
the document.
 The test strategy document also specifies the testing activities and phases that need to
be approved.

Testing Methodology

 The next module in the test strategy document is Testing methodology, which is mainly used
to specify the levels of testing, the testing procedure, and the roles and responsibilities of all the team
members.
 The testing approach also contains the change management process involving the
modification request submission, pattern to be used, and activity to manage the request.
 Above all, if the test strategy document is not established appropriately, then it might lead to
errors or mistakes in the future.

Testing Environment Specifications

 Another component of the test strategy document is Testing Environment Specification.


 As we are already aware, the specification of the test data requirements is exceptionally
significant. Hence, clear guidelines on how to prepare test data are included in the testing
environment specification of the test strategy document.
 This module specifies the information related to the number of environments and the setup
demanded.
 The backup and restore strategies are also offered to ensure that there is no data loss because
of the coding or programming issues.
Testing Tools

 Testing tools are another vital component of the test strategy document, as it stipulates the
complete information about the test management and automation tools necessary for the test
execution activity.
 For security, performance, load testing, the necessary methodologies, and tools are
defined by the details of the open-source or commercial tool and the number of users that
can be kept by it.

Release Control

 Another important module of the test strategy document is Release Control.


 It is used to ensure that correct and effective test execution and release management
strategies are systematically developed.

Risk Analysis

 The next component of the test strategy document is Risk Analysis.


 In the test strategy document, all the possible risks linked to the project that can become a
problem in test execution are described.
 Furthermore, for handling these risks, a clear strategy is also formed in order to make sure
that they are addressed properly.
 We also create a contingency plan if the development team faces these risks in real-time.

Review and Approvals

 The last component of the Testing strategy document is Review and Approval.
 When all the related testing activities are specified in the test strategy document, it is
reviewed by the concerned people like:
 System Administration Team
 Project Management Team
 Development Team
 Business Team

Types of Test Strategies

Here, we are discussing some of the significant types of test strategies document:
Methodical Strategy

 The first type of test strategy is the Methodical strategy.


 In this, the test teams follow a set of test conditions, a pre-defined quality standard (like
ISO 25000), or checklists.
 Standard checklists exist for specific types of testing, such as security testing.

Reactive Strategy

 The next type of test strategy is known as Reactive strategy.


 In this, we design the tests and execute them only after the real software is delivered.
Therefore, the testing is based upon the defects identified in the existing system.
 Suppose we use exploratory testing; the tests are established and derived from the existing
features and behaviors.
 These tests are restructured based on the outcome of the testing performed by the test
engineer.

Analytical strategy

 Another type of test strategy is the Analytical strategy, which is used to perform testing based on
requirements; the requirements are analyzed to derive the test conditions, and then tests are
designed, implemented, and performed to meet those requirements. Examples include risk-
based testing and requirements-based testing.
 Even the outcomes are recorded in terms of requirements, such as requirements tested and
passed.
Standards compliant or Process compliant strategy

 In this type of test strategy, the test engineer will follow the procedures or guidelines created
by a panel of industry specialists or committee standards to find test conditions, describe test
cases, and put the testing team in place.
 Suppose a project follows the Scrum Agile technique. In that case, the test engineer will
generate a complete test strategy, beginning from classifying test criteria, essential test cases,
performing tests, reporting status, etc., around each user story.
 A good example of a standards-compliant process is medical systems following US
FDA (Food and Drug Administration) standards.

Model-based strategy

 The next type of test strategy is a model-based strategy. The testing team selects the current or
expected situation and produces a model for it with the following aspects: inputs, outputs,
processes, and possible behavior.
 And the models are also established based on the current data speeds, software, hardware,
infrastructure, etc.

Regression averse strategy

 In the regression averse strategy, the test engineer mainly emphasizes decreasing regression
risks for functional or non-functional product parts.
 For example, suppose we have one web application and must test the regression issues for that
particular application. The testing team can develop test automation for both typical and
exceptional use cases for this scenario.
 And so that the tests can be run whenever the application is modified, the testing team
can use GUI-based automation tools.
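As a minimal illustration of regression automation, the suite below pins down the typical and exceptional behavior of one feature so it can be rerun after every change. The `transfer` function is invented for this sketch; it merely stands in for a feature of the application under test:

```python
# Hypothetical feature under test -- stands in for one feature of a web app.
def transfer(balance: float, amount: float) -> float:
    """Deduct a transfer amount from a balance; reject invalid amounts."""
    if amount <= 0 or amount > balance:
        raise ValueError("invalid transfer amount")
    return balance - amount

def run_regression_suite() -> None:
    # Typical use case: a valid transfer succeeds.
    assert transfer(100.0, 40.0) == 60.0
    # Exceptional use case: an overdraft must be rejected.
    try:
        transfer(100.0, 150.0)
    except ValueError:
        pass
    else:
        raise AssertionError("overdraft was not rejected")
    print("regression suite passed")

run_regression_suite()  # rerun whenever the application is modified
```

In practice such suites are run automatically (for a GUI, via a GUI automation tool) after each change, so any regression is caught immediately.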

Consultative strategy

 The consultative strategy consults key stakeholders for input when choosing the scope of test
conditions, as in user-directed testing.
 In order of priority, the client will provide a list of browsers and their versions, operating
systems, a list of connection types, anti-malware software, and also the contradictory list,
against which they want the application tested.
 Depending on the items given in the provided lists, the test engineer may use various
testing techniques, such as equivalence partitioning.
Resource Requirements

Project resources simply mean resources that are required for successful development and completion
of project. These resources can be capital, people, material, tool, or supplies that are helpful to carry
out certain tasks in project. Without these resources, it is impossible to complete project. In project
planning phase, identification of resources that are required for completion of project and how they
will be allocated is key element and very important task to do. In project management, some resources
that are required are assigned to each task of project to get job done.

There are three types of resources that are considered essential for the execution of a project and for
its completion on time and on budget. These resources can be denoted by a pyramid, also known as
the Resource Pyramid. At the base of the pyramid, hardware and software tools are present; at the
middle layer, reusable components; and at the top layer, human resources. This is shown in the
following diagram:

When software planner wants to specify resources, they specify it using four characteristics :

 Description of resource
 Resource availability
 Time of resource when it will be available
 Duration of resource availability
Types of resources :

Human Resource –

Humans play an important role in the software development process. No matter the size or
complexity of the project, if you want to perform project tasks in an effective manner, human
resources are essential. In the software industry, people are assigned organizational positions such as
manager, software developer, software test engineer, and so on. These positions are according to their
skills and specialty.

For a small project, a single individual can perform all these roles. But for a large project, a team of
people works on it. The total number of people required for a project is estimated by calculating the
development effort in terms of person-months.
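The person-month arithmetic can be sketched in a few lines; the effort and duration figures below are hypothetical examples, not estimates from any real project:

```python
import math

def team_size(effort_person_months: float, duration_months: float) -> int:
    """Average number of people needed to deliver the effort on schedule."""
    return math.ceil(effort_person_months / duration_months)

# Example: 60 person-months of estimated effort over a 10-month schedule
print(team_size(60, 10))  # -> 6
```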

Reusable Components –

To bring ease to the software development process, or to accelerate it, the software industry prefers to
use ready-made software components. Components can be defined as software building blocks that
can be created and reused in the software development process. Generally, regardless of their type,
size, or complexity, all projects need money, and managing the budget is one of the most important
tasks a project manager has to do. Reusable resources, also known as cost resources, are very helpful
as they reduce the overall cost of development. The use of components emphasizes reusability. This is
also termed Component-Based Software Engineering.

Hardware and Software tools –

These are the material resources that are part of the project. This type of resource should be planned
before starting development of the project; otherwise, it may cause problems for the project.

For example, if you require certain software elements while performing a task and somehow cannot
get them on time, they could take a few weeks to ship from the manufacturer, and this will delay your
project.
Tester Assignments

1. What qualities should a tester have?

 You understand priorities

 You ask questions
 You can generate a number of ideas
 You can analyze data
 You can report negative things in a positive way
 You are good at reporting

Test Schedule

A test schedule includes the testing steps or tasks, the target start and end dates, and responsibilities. It
should also describe how the test will be reviewed, tracked, and approved.

Test Schedule Template


Test Cases

The test case is defined as a group of conditions under which a tester determines whether a software
application is working as per the customer's requirements or not. Test case designing includes
preconditions, case name, input conditions, and expected result. A test case is a first-level action
derived from test scenarios.
A test case gives detailed information about the testing strategy, testing process, preconditions, and
expected output. Test cases are executed during the testing process to check whether the software
application is performing the task for which it was developed.
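The elements of a test case named above (case name, preconditions, input conditions, expected result) can be represented as a simple record. This is an illustrative sketch only; the field names and the login example are our own assumptions, not a prescribed format:

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    case_name: str          # unique name for the test case
    preconditions: list     # state required before execution
    input_conditions: dict  # inputs supplied to the application
    expected_result: str    # outcome per the customer's requirements
    actual_result: str = ""
    status: str = "Not Executed"  # Pass / Fail / Blocked / Not Executed

    def execute(self, actual: str) -> None:
        """Record the actual result and mark the case passed or failed."""
        self.actual_result = actual
        self.status = "Pass" if actual == self.expected_result else "Fail"

# A hypothetical test case for a login module
tc = TestCase(
    case_name="TC_Login_01",
    preconditions=["User is registered"],
    input_conditions={"username": "user1", "password": "secret"},
    expected_result="Home page is displayed",
)
tc.execute("Home page is displayed")
print(tc.status)  # -> Pass
```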

When do we write a test case?

 When the customer gives the business needs, the developers start developing and state that
they need, say, 3.5 months to build the product.
 In the meantime, the testing team will start writing the test cases.
 Once that is done, the test cases are sent to the Test Lead for the review process.
 And when the developers finish developing the product, it is handed over to the testing team.
 The test engineers test against the written test cases rather than re-reading the requirement
document, so that testing is consistent and does not depend on the mood of the person or on
the individual test engineer.

Why do we write test cases?

1. To ensure consistency in test case execution


2. To ensure better test coverage
3. Testing depends on the process rather than on a person
4. To avoid training every new test engineer on the product

To ensure consistency in test case execution: we will follow the test cases and start testing the
application.

To ensure better test coverage: for this, we should cover all possible scenarios and document
them, so that we need not remember all the scenarios again and again.

Testing depends on the process rather than on a person: suppose a test engineer tested an
application during the first and second releases and left the company at the time of the third release.
The test engineer understood a module and tested the application thoroughly by deriving many
values. If the person is not there for the third release, it becomes difficult for the new person. Hence
all the derived values are documented so that they can be used in the future.
To avoid training every new test engineer on the product: when a test engineer leaves, he or she
leaves with a lot of knowledge and scenarios. Those scenarios should be documented, so that a new
test engineer can test with the given scenarios and also write new scenarios.

Types of test cases

We have a different kind of test cases, which are as follows:

 Functional test cases


 Integration test cases
 System test cases

Here, we are writing a test case for the ICICI application’s Login module:
The functional test cases

Firstly, we check for which field we will write test cases and then describe accordingly.

In functional testing, or if the application is data-driven, we require the input column; otherwise, it is
a bit time-consuming.

Rules to write functional test cases:

 In the expected results column, try to use "should be" or "must be".


 Highlight the Object names.
 We have to describe only those steps which we required the most; otherwise, we do not need
to define all the steps.
 To reduce the excess execution time, we will write steps correctly.
 Write a generic test case; do not try to hard code it.

Let's say it is the amount transfer module, so we are writing the functional test cases for it and also
specify that it is not a login feature.
The functional test case for amount transfer module is in the below Excel file:

Integration test case

In this, we should not write something which we already covered in the functional test cases, and
something we have written in the integration test case should not be written in the system test case
again.

Rules to write integration test cases

 Firstly, understand the product


 Identify the possible scenarios
 Write the test case based on the priority
 When the test engineers write the test cases, they may need to consider the following
aspects:

If the test cases are detailed:

 They will try to achieve maximum test coverage.


 All test case values or scenarios are correctly described.
 They will try to think about the execution point of view.
 The template which is used to write the test case must be unique.

System test cases

We will write the system test cases for the end-to-end business flows, and we need all the modules to
be ready in order to write the system test cases.

The process to write test cases

The process of writing a test case can be divided into the following steps:
System study

In this, we will understand the application by looking at the requirements or the SRS, which is given
by the customer.

Identify all scenarios:

When the product is launched, identify all the possible ways in which the end user may use the
software.

All possible scenarios are documented in a document called the test design / high-level design.

The test design is a record having all the possible scenarios.

Write test cases

Convert all the identified scenarios to test cases, group the scenarios related to their features,
prioritize the modules, and write test cases by applying test case design techniques and using the
standard test case template, meaning the one decided for the project.

Review the test cases

Review the test case by giving it to the head of the team and, after that, fix the review feedback given
by the reviewer.

Test case approval

After fixing the test case based on the feedback, send it again for the approval.

Store in the test case repository

After the approval of a particular test case, store it in a common place known as the test case
repository.

Bug Reporting

A bug is an error or defect in software that causes it to behave unexpectedly or produce incorrect
results. A bug may be due to an error in the design, code, or configuration of the software.

For example, Let's say you transfer money to a friend using the mobile banking app, but when you
enter the amount to transfer and click the "Send" button, the app crashes and closes. This is an error in
the transfer function of the application and will prevent you from completing the transaction. The
error can be caused by several factors, such as a memory leak in the application code or problems in
communication between the application and the bank's servers.
Introduction to Bug Report in Software Testing

A bug report is a document that identifies and describes a software problem or defect. A bug report
contains important information that developers can reproduce and fix the problem. A well-written bug
report should include relevant information such as software version, operating system, and any other
relevant information about the environment or configuration. The report should also include a clear
and concise description of the problem, including steps to reproduce it, associated files or error
messages, and assigned severity and priority.

Importance of Bug Report

Bug reports are important in software development and testing for several reasons.

Bug detection:

Bug reports help identify software problems or bugs, allowing developers to fix them before the
software is released to users. Identifying problems early in the development process can save time and
resources in the long run.

Reproducibility of the Bug:

Bug report provides detailed information about the reproducibility of the problem, which is essential
for developers to understand and solve the problem. Adding screenshots or recordings of the user
interface is a valuable addition to the bug report. Combined with clear and detailed steps to reproduce
the error, this can provide important context for troubleshooting.

Bug prioritization:

Bug reports help prioritize issues by determining their severity and priority levels. This ensures that
critical issues are dealt with first and that less critical issues are resolved later.

Communication:

Bug reports provide a means of communication between testers and developers, allowing them to
work together to find solutions to problems.

Improve quality:

Solving bugs identified in bug reports can improve the quality of software, making it more reliable,
secure, and user-friendly.

How to Write a Bug Report

When writing a bug report, follow these guidelines:


Identify the bug:

First, identify the problem or bug you encountered in the software. Restate the problem and note the
steps you took to get to that point.

Gather information:

Gather any relevant information you can find about the problem, such as software version, operating
system, hardware configuration, and error messages or screenshots.

Write a clear and concise title:

The title of a bug report should be short and summarize the problem in a few words.

Issue Description:

Write a detailed description of the problem, explaining what's happening, how it's happening, and
why it's a problem. Please provide the exact steps to reproduce the problem and any error messages or
logs that may help identify the root cause.

Assign a Severity Level:

Assign a severity level to a problem based on how it affects the software and users. Use a severity
scale from low to critical, with critical indicating a major impact on the software.

Assign a priority level:

Assign a priority level to an issue based on its urgency, importance, and impact on the software and
users. Use a scale from low to critical priority, which indicates that the problem needs to be addressed
immediately.

Add relevant files:

If you have relevant files (such as screenshots, error messages, or logs), attach them to the bug report.
These files help developers understand the problem and its root cause.

File a bug report:

Once you've written the bug report, submit it to the appropriate team or person responsible for
resolving the issue.

By following these steps, you can create a comprehensive and effective bug report that will help
developers reproduce and resolve the issue effectively.
Bug Reporting Best Practices

Reproduce the problem:

Before creating a bug report, make sure you can reproduce the problem consistently. This helps
developers understand the problem and its root cause.

Provide specific steps:

When you describe the problem, give specific steps to reproduce it, including any input data, error
messages, or logs that may be relevant. This helps developers understand the problem and its context.

Be concise:

Write clear and concise bug reports, focusing on relevant details. Avoid providing irrelevant
information that may distract from the problem.

Assign Severity and Priority:

Assign a severity and priority level to a problem based on how it affects the software and users. This
helps prioritize issues and ensure critical issues are addressed first.

Use a standard format:

Use a standard format for bug reports, including title, description, environment, severity, priority,
steps to reproduce, expected behavior, actual behavior, and attachments.

Testing in multiple environments:

Test the software in multiple environments to ensure that the problem is not limited to a specific
configuration.

Avoid duplication:

Before creating a new bug report, check if the problem has already been reported. Repeated error
reports can waste time and resources.

Include screenshots or videos:

Adding screenshots or videos helps developers understand the problem and its context, making it
easier to reproduce and solve it.

Bug Reporting Tools and Tracking Systems


Bug reporting tools and tracking systems are software tools that allow testers, developers, and project
managers to effectively track and manage software issues. Here are some popular bug-reporting tools
and tracking systems.

Jira:

Jira is a popular bug tracking and project management tool that allows teams to effectively track and
manage software issues, tasks, and workflows.

Bugzilla:

Bugzilla is an open source bug tracking system that allows teams to track and manage software issues,
bug reports, and fixes.

MantisBT:

MantisBT is an open source bug tracking system that allows teams to track and manage software
issues, bug reports and project workflows.

Metrics and Statistics

Metrics

The purpose of software testing metrics is to increase the efficiency and effectiveness of the software
testing process while also assisting in making better decisions for future testing by providing accurate
data about the testing process. A metric expresses the degree to which a system, system component, or
process possesses a certain attribute in numerical terms.

Importance of Metrics in Software Testing

Test metrics are essential in determining the software’s quality and performance. Developers may use
the right software testing metrics to improve their productivity.

Test metrics help to determine what types of enhancements are required in order to create a defect-
free, high-quality software product.

Make informed judgments about the testing phases that follow, such as project schedule and cost
estimates.

Examine the current technology or procedure to see if it needs any more changes.

Types of Software Testing Metrics

Software testing metrics are divided into three categories:


Process Metrics: A project’s characteristics and execution are defined by process metrics. These
features are critical to the improvement and maintenance of the SDLC (Software Development
Life Cycle) process.

Product Metrics: A product’s size, design, performance, quality, and complexity are defined by
product metrics. Developers can improve the quality of their software development by utilizing these
features.

Project Metrics: Project Metrics are used to assess a project’s overall quality. It is used to estimate a
project’s resources and deliverables, as well as to determine costs, productivity, and flaws.

Test Metrics Life Cycle

The below diagram illustrates the different stages in the test metrics life cycle.

The various stages of the test metrics lifecycle are:

Analysis:

The metrics to be captured must be identified.

Define the QA metrics that have been identified.


Communicate:

Stakeholders and the testing team should be informed about the requirement for metrics.

Educate the testing team on the data points that must be collected in order to process the metrics.

Evaluation:

Data should be captured and verified.

Use the data collected to calculate the value of the metrics.

Report:

Create a strong conclusion for the report.

Distribute the report to the appropriate stakeholder and representatives.

Gather input from stakeholder representatives.

Formula for Test Metrics

To get the percentage execution status of the test cases, the following formula can be used:

Percentage test cases executed = (No of test cases executed / Total no of test cases written) x 100

Similarly, it is possible to calculate for other parameters also such as test cases that were not executed,
test cases that were passed, test cases that were failed, test cases that were blocked, and so on. Below
are some of the formulas:

1. Test Case Effectiveness:

Test Case Effectiveness = (Number of defects detected / Number of test cases run) x 100

2. Passed Test Cases Percentage: This metric indicates the percentage of test cases that
passed.

Passed Test Cases Percentage = (Number of passed test cases / Total number of tests executed) x 100

3. Failed Test Cases Percentage: This metric measures the proportion of all failed test cases.

Failed Test Cases Percentage = (Total number of failed test cases / Total number of tests executed) x
100
4. Blocked Test Cases Percentage: During the software testing process, this parameter determines
the percentage of test cases that are blocked.

Blocked Test Cases Percentage = (Total number of blocked tests / Total number of tests executed) x
100

5. Fixed Defects Percentage: Using this measure, the team may determine the percentage of defects
that have been fixed.

Fixed Defects Percentage = (Total number of flaws fixed / Number of defects reported) x 100

6. Rework Effort Ratio: This measure helps to determine the rework effort ratio.

Rework Effort Ratio = (Actual rework efforts spent in that phase/ Total actual efforts spent in that
phase) x 100

7. Accepted Defects Percentage: This measures the percentage of defects that are accepted as valid
out of the total reported defects.

Accepted Defects Percentage = (Defects Accepted as Valid by Dev Team / Total Defects Reported) x
100

8. Defects Deferred Percentage: This measures the percentage of the defects that are deferred for
future release.

Defects Deferred Percentage = (Defects deferred for future releases / Total Defects Reported) x 100

Example of Software Test Metrics Calculation

Let’s take an example to calculate test metrics:

S No. | Testing Metric                                       | Data retrieved during test case development
1     | No. of requirements                                  | 5
2     | Average number of test cases written per requirement | 40
3     | Total no. of test cases written for all requirements | 200
4     | Total no. of test cases executed                     | 164
5     | No. of test cases passed                             | 100
6     | No. of test cases failed                             | 60
7     | No. of test cases blocked                            | 4
8     | No. of test cases unexecuted                         | 36
9     | Total no. of defects identified                      | 20
10    | Defects accepted as valid by the dev team            | 15
11    | Defects deferred for future releases                 | 5
12    | Defects fixed                                        | 12

1. Percentage test cases executed = (No. of test cases executed / Total no. of test cases written) x 100

= (164 / 200) x 100

= 82%

2. Test Case Effectiveness = (Number of defects detected / Number of test cases run) x 100

= (20 / 164) x 100

= 12.2%

3. Failed Test Cases Percentage = (Total number of failed test cases / Total number of tests
executed) x 100

= (60 / 164) x 100

= 36.59%

4. Blocked Test Cases Percentage = (Total number of blocked tests / Total number of tests
executed) x 100

= (4 / 164) x 100

= 2.44%

5. Fixed Defects Percentage = (Defects fixed / Total defects reported) x 100

= (12 / 20) x 100

= 60%

6. Accepted Defects Percentage = (Defects accepted as valid by dev team / Total defects
reported) x 100

= (15 / 20) x 100

= 75%

7. Defects Deferred Percentage = (Defects deferred for future releases / Total defects reported) x
100

= (5 / 20) x 100

= 25%
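The worked example above can be checked with a few lines of Python; the variable names are illustrative, and every figure comes straight from the table:

```python
# Figures from the worked example table.
written, executed = 200, 164
passed, failed, blocked = 100, 60, 4
defects, accepted, deferred, fixed = 20, 15, 5, 12

pct_executed  = executed / written  * 100  # 82.0
effectiveness = defects  / executed * 100  # ~12.2
pct_failed    = failed   / executed * 100  # ~36.59
pct_blocked   = blocked  / executed * 100  # ~2.44
pct_fixed     = fixed    / defects  * 100  # 60.0
pct_accepted  = accepted / defects  * 100  # 75.0
pct_deferred  = deferred / defects  * 100  # 25.0

print(round(effectiveness, 2), round(pct_failed, 2), round(pct_blocked, 2))
# 12.2 36.59 2.44
```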

Statistics

Statistical testing is a testing method whose objective is to estimate the reliability of a software
product rather than to discover errors. Test cases for statistical testing are therefore designed with an
entirely different objective from those of conventional testing.
Operation Profile:

Different classes of users may use a software product for different purposes. For example, a
librarian may use library automation software to create member records, add books to the library,
etc., whereas a library member may use the software to query the availability of a book or to issue
and return books. Formally, the operation profile of a software product is defined as the probability
distribution of the inputs of an average user. If the inputs are divided into a number of classes {Ci},
the probability value of a class represents the probability of an average user selecting his next input
from that class. Thus, the operation profile assigns a probability value Pi to each input class Ci.
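A minimal sketch of an operation profile for the library example, assuming illustrative input-class names and probability values Pi that sum to 1 (none of these values come from the text):

```python
# Hypothetical operation profile: input classes Ci mapped to
# probabilities Pi. The classes and values are illustrative assumptions.
import random

operation_profile = {
    "create_member_record": 0.10,
    "add_book":             0.15,
    "query_availability":   0.45,
    "issue_or_return_book": 0.30,
}

# The Pi values must form a probability distribution.
assert abs(sum(operation_profile.values()) - 1.0) < 1e-9

def next_input_class(profile, rng=random):
    """Sample an average user's next input class according to the profile."""
    classes = list(profile)
    weights = [profile[c] for c in classes]
    return rng.choices(classes, weights=weights, k=1)[0]

print(next_input_class(operation_profile))
```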

Steps in Statistical Testing:

Statistical testing allows one to concentrate on testing those parts of the system that are most likely
to be used. The first step of statistical testing is to determine the operation profile of the software.
The next step is to generate a set of test data corresponding to the determined operation profile. The
third step is to apply the test cases to the software and record the time between each failure. Once a
statistically significant number of failures has been observed, the reliability can be computed.
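The third step, recording the time between failures, can be sketched as follows. The failure times are illustrative assumptions, and reliability here is estimated simply as the mean time between failures (MTBF), one common reliability measure:

```python
# Hypothetical clock times (in hours of testing) at which failures
# were observed while executing profile-driven test cases.
failure_times = [12.0, 30.5, 55.0, 91.0, 140.0]

# Time between consecutive failures (first interval measured from t=0).
inter_failure = [b - a for a, b in zip([0.0] + failure_times, failure_times)]

# Mean time between failures as a simple reliability estimate.
mtbf = sum(inter_failure) / len(inter_failure)

print(inter_failure)  # [12.0, 18.5, 24.5, 36.0, 49.0]
print(mtbf)           # 28.0
```

In practice the number of observed failures must be statistically significant before such an estimate is trusted, as the text notes below.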

Advantages and Disadvantages of Statistical Testing:

Statistical testing allows one to concentrate on testing the parts of the system that are most likely to
be used. Therefore, it results in a system that users perceive to be more reliable (than it actually is!).
Reliability estimates obtained through statistical testing are more accurate than those of other
methods such as ROCOF, POFOD, etc. However, it is difficult to perform statistical testing properly:
there is no simple and repeatable way of defining operation profiles, and generating test cases for
statistical testing is very cumbersome because the number of test cases with which the system is
tested must be statistically significant.
