The Goal of Test Planning, High Level Expectations, Intergroup Responsibilities, Test Phases, Test Strategy, Resource Requirements, Tester Assignments, Test Schedule, Test Cases, Bug Reporting, Metrics and Statistics.
A test plan is a detailed document that describes the software testing areas and activities. It outlines the test strategy, objectives, test schedule, required resources (human resources, software, and hardware), test estimation, and test deliverables.
The test plan is the base of all software testing. It is the most crucial activity, as it ensures that all planned activities are available in an appropriate sequence.
The test plan is a template for conducting software testing activities as a defined process that is fully monitored and controlled by the testing manager. The test plan is typically prepared by the Test Lead (60%), the Test Manager (20%), and the test engineer (20%).
There are three types of test plan:
Master Test Plan: a test plan that covers multiple levels of testing and includes a complete test strategy.
Phase Test Plan: a test plan that addresses any one phase of the testing strategy, for example, a list of tools, a list of test cases, etc.
Specific Test Plan: a test plan designed for major types of testing such as security testing, load testing, performance testing, etc. In other words, a specific test plan is designed for non-functional testing.
How to write a Test Plan
Making a test plan is the most crucial task of the test management process. According to IEEE 829, seven steps are followed to prepare a test plan.
The test plan consists of various parts, which help us to derive the entire testing activity.
Objectives: This part consists of information about the modules, features, test data, etc., and indicates the aim of the application, i.e., the application's behavior, goals, and so on.
Scope: It contains information about what needs to be tested with respect to the application. The scope can be further divided into two parts:
In scope
Out of scope
In scope: These are the modules that need to be tested rigorously (in detail).
Out of scope: These are the modules that do not need to be tested in detail.
Example: In an application, features A, B, C, and D have to be developed, but feature B has already been designed by another company. So the development team will purchase B from that company and perform only integration testing of B with A, C, and D.
Testing Methodology: The methods to be used for testing vary from application to application. The testing methodology is decided based on the features and the application requirements. Since testing terms are not standardized, the kinds of testing to be used should be defined in the testing methodology so that everyone can understand them.
Approach: The approach to testing differs from one software product to another. It deals with the flow of the application for future reference. It has two aspects:
High-Level Scenarios: For testing critical features, high-level scenarios are written, for example, logging in to a website or booking from a website.
The Flow Graph: A flow graph is used when one wants to gain benefits such as making converging and merging of flows easier.
Assumptions: The assumptions made while planning the testing, which should hold for the plan to succeed.
Example:
The testing team will get proper support from the development team.
The tester will get proper knowledge transfer from the development team.
Proper resource allocation will be given by the company to the testing department.
Risk: All the risks that can occur if an assumption breaks. For example, in the case of a wrong budget estimate, the cost may overrun. Such risks are handled through a backup/mitigation plan.
Backup/Mitigation Plan:
If any risk is involved, then the company must have a backup plan; the purpose is to avoid errors. Some points to resolve or avoid risk:
Test priority is to be set for each test activity.
Managers should have leadership skills.
Training courses should be arranged for the testers.
Roles and Responsibilities: All the responsibilities and the role of every member of the testing team have to be recorded.
Example:
Test Manager: Manages the project, arranges appropriate resources, and gives project direction.
Tester: Identifies the testing techniques, verifies the test approach, and helps to save project cost.
Scheduling: This records the start and end dates of each and every testing-related activity, for example, the dates on which test case writing starts and ends.
Defect Tracking: This is an important process in software engineering, as many issues arise when developing a critical system for a business. Any defect found while testing must be reported to the development team and tracked until it is fixed.
Example: Bugs can be tracked using bug tracking tools such as Jira, Mantis, and Trac.
Test Environment: This is the environment that the testing team will use, i.e., the list of hardware and software on which the application is tested. The items to be tested are written under this section, and the installation of the software is also checked here.
Example:
Software configuration on different operating systems, such as Windows, Linux, Mac, etc.
Entry and Exit Criteria: The set of conditions that should be met in order to start a type of testing (entry criteria) and to end it (exit criteria).
Entry Condition: the conditions that must be satisfied before testing can begin.
Exit Condition: the conditions that must be satisfied before testing can be stopped.
Example: If the team members report that 45% of the test cases failed, then testing will be suspended until the developer team fixes all the defects.
Test Automation: It specifies which features are to be automated and which features are not to be automated.
Deliverables: These are the outcomes from the testing team that are to be given to the customer at the end of the project, such as:
Test scripts.
Test data.
Error logs.
Test Reports.
Defect Report.
Installation Report.
The deliverables also contain the test plan, defect report, automation report, assumption report, tools, and other components that have been used for developing and maintaining the testing effort.
Template: A template is followed for every kind of report that is going to be prepared by the testing team.
Intergroup Responsibilities
Responsibilities of a Test Lead:
− Attend weekly meeting of projects and provide inputs from the Testers' perspective.
− Arrange the Hardware and software requirement for the Test Setup.
− Escalate issues about project requirements (software, hardware, resources) to the Project Manager / Test Manager.
− Assign tasks to all testing team members and ensure that all of them have sufficient work in the project.
− Prepare the Agenda for the meeting for example: Weekly Team meeting etc.
− Attend the regular client call and discuss the weekly status with the client.
− Communication by means of Chat / emails etc. with the Client (If required).
− Act as the single point of contact between Development and Testers for iteration testing and deployment activities.
− Track and report on testing activities, including testing results, test case coverage, required resources, defects discovered and their status, performance baselines, etc.
− Assist in performing any applicable maintenance to tools used in Testing and resolve issues if any.
− Ensure that the content and structure of all testing documents / artifacts are documented and maintained.
− Document, implement, monitor, and enforce all processes and procedures for testing as established per the standards defined by the organization.
− Log project related issues in the defect tracking tool identified for the project.
− Identify Training requirements and forward it to the Project Manager (Technical and Soft skills).
Responsibilities of a Tester:
- Prepare / update the test case document for testing the application from all aspects.
- Conduct the testing, including sanity and functional testing, and execute the test cases.
- Verify defects.
Test Phases
1. Static Testing
2. Unit Testing
3. Integration Testing
4. System Testing
5. Acceptance Testing
1. Static Testing
Static testing is a verification process used to test the application without executing its code, and it is a cost-effective process.
To avoid errors, static testing is performed in the initial stage of development, because at that stage it is easier to identify the sources of errors and they can be fixed easily.
In other words, static testing can be done manually or with the help of tools to improve the quality of the application by finding errors at an early stage of development; it is also called the verification process.
The following work products can be examined while performing static testing:
BRD [Business Requirements Document]
Functional or system Requirements
Unit Use Cases
Prototype
Prototype Specification Document
Test Data
DB Fields Dictionary Spreadsheet
Static testing is required whenever we encounter the following situations while testing an application or software:
We need static testing because dynamic testing is a time-consuming process, even though dynamic testing identifies the bugs and provides some information about them.
When developing software, we cannot rely completely on dynamic testing, as it finds the bugs or defects of the application/software at a later stage, and it then takes the programmers plenty of time and effort to fix them.
We also need to perform static testing because dynamic testing is more expensive than static testing; executing the test cases in dynamic testing is costly, as the test cases have to be created in the initial stages.
In addition, the implementation and validation of the test cases have to be maintained, which takes a lot of the test engineers' time.
Static Testing Techniques
Static testing techniques offer a great way to enhance the quality and efficiency of software
development. The Static testing technique can be done in two ways, which are as follows:
Review
Static Analysis
Review
In static testing, a review is a technique or process implemented to find possible bugs in the application. In the review process, we can easily identify and eliminate faults and defects in the various supporting documents, such as the SRS [Software Requirements Specification]. In other words, a review in static testing is an activity through which all the team members come to understand the project's progress.
In static testing, reviews can be divided into four different parts, which are as follows:
Informal reviews
Walkthroughs
Technical/peer review
Inspections
Informal reviews
In an informal review, the author of the document places the contents in front of the reviewers, and everyone gives their views; therefore, bugs are identified at an early stage.
Walkthrough
The walkthrough review is generally performed by a skilled person or expert to verify the bugs, so that there are no problems later in the development or testing phase.
Peer review
In a peer review, team members check one another's documents to find and resolve bugs; it is generally done within a team.
Inspection
An inspection is essentially a verification of the document by a higher authority, for example, the verification of the SRS [Software Requirements Specification] document.
Static Analysis
The other static testing technique is static analysis, which is used to assess the quality of the code written by the developers.
Different tools can be used to perform the analysis and evaluation of the code.
In other words, the code written by the developers is analyzed with tools for structural defects, which might otherwise cause bugs.
The static analysis will also help us to identify the below errors:
Dead code
Unused variables
Endless loops
Incorrect syntax
Variable with undefined value
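As a small illustration (hypothetical code, not from the text above, with the incorrect-syntax case left out so the snippet stays parseable), the following Python fragment shows the kinds of issues a static analysis tool would typically flag:

def calculate_discount(price):
    unused_rate = 0.15            # unused variable: assigned but never read
    if price > 100:
        return price * 0.9
    return price
    print("discount applied")     # dead code: unreachable after the return above

def wait_for_event():
    while True:                   # endless loop: no break or return inside the loop body
        pass

def total_cost(quantity):
    return quantity * unit_price  # variable with undefined value: 'unit_price' is never defined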
In static testing, static analysis can be further classified into three parts, which are discussed below:
Data Flow: In static analysis, data flow analysis examines how data values are defined and used along the paths of the program.
Control Flow: Control flow analysis examines the order in which the statements or instructions are executed.
Cyclomatic Complexity: This is a measurement of the program's complexity, which corresponds to the number of independent paths in the control flow graph of the program (a worked example follows below).
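As a quick worked illustration of cyclomatic complexity (the counts here are hypothetical): using the standard formula V(G) = E - N + 2P, where E is the number of edges, N the number of nodes, and P the number of connected components of the control flow graph, a function whose graph has 9 edges, 8 nodes, and 1 component has V(G) = 9 - 8 + 2 = 3, i.e., three linearly independent paths, each of which should be exercised by at least one test case.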
Some commonly used static analysis tools are:
CheckStyle
SourceMeter
Soot
Unit testing is a type of software testing that focuses on individual units or components of a
software system. The purpose of unit testing is to validate that each unit of the software
works as intended and meets the requirements. Unit testing is typically performed by
developers, and it is performed early in the development process before the code is integrated
and tested as a whole system.
Unit tests are automated and are run each time the code is changed to ensure that new code
does not break existing functionality. Unit tests are designed to validate the smallest possible
unit of code, such as a function or a method, and test it in isolation from the rest of the
system. This allows developers to quickly identify and fix any issues early in the
development process, improving the overall quality of the software and reducing the time
required for later testing.
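For illustration, a minimal unit test written with Python's built-in unittest module might look like the sketch below; the add function and its expected values are hypothetical and stand in for any unit under test:

import unittest

def add(a, b):
    # Unit under test: returns the sum of two numbers
    return a + b

class TestAdd(unittest.TestCase):
    def test_adds_two_positive_numbers(self):
        # The unit is exercised in isolation with known inputs and outputs
        self.assertEqual(add(2, 3), 5)

    def test_adds_negative_numbers(self):
        self.assertEqual(add(-1, -4), -5)

if __name__ == "__main__":
    unittest.main()

Such tests are typically re-run automatically on every code change so that a regression in the unit is caught immediately.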
Black Box Testing: This testing technique is used in covering the unit tests for input, user
interface, and output parts.
White Box Testing: This technique is used in testing the functional behavior of the system
by giving the input and checking the functionality output including the internal design
structure and code of the modules.
Gray Box Testing: This technique is used in executing the relevant test cases, test methods,
test functions, and analyzing the code performance for the modules.
Some commonly used unit testing tools are:
Jtest
JUnit
NUnit
EMMA
PHPUnit
Advantages of Unit Testing:
Unit testing allows developers to learn what functionality is provided by a unit and how to use it, giving a basic understanding of the unit's API.
Unit testing allows the programmer to refine code and make sure the module works properly.
Unit testing enables testing parts of the project without waiting for others to be completed.
Disadvantages of Unit Testing:
Integration Testing
Integration Testing is defined as a type of testing where software modules are integrated
logically and tested as a group. A typical software project consists of multiple software
modules, coded by different programmers. The purpose of this level of testing is to expose
defects in the interaction between these software modules when they are integrated.
Integration Testing focuses on checking data communication amongst these modules. Hence
it is also termed as ‘I & T’ (Integration and Testing), ‘String Testing’ and sometimes ‘Thread
Testing’.
Although each software module is unit tested, defects can still exist for various reasons. For example, at the time of module development there are wide chances of changes in requirements by the clients; these new requirements may not be unit tested, and hence integration testing becomes necessary.
Big Bang Testing is an Integration testing approach in which all the components or modules
are integrated together at once and then tested as a unit. This combined set of components is
considered as an entity while testing. If all of the components in the unit are not completed,
the integration process will not execute.
Advantages:
Disadvantages:
Incremental Testing
In the Incremental Testing approach, testing is done by integrating two or more modules that
are logically related to each other and then tested for proper functioning of the application.
Then the other related modules are integrated incrementally and the process continues until
all the logically related modules are integrated and tested successfully.
Stubs and Drivers are dummy programs used in integration testing to facilitate the software testing activity. These programs act as substitutes for the missing modules in the testing. They do not implement the entire programming logic of the software module, but they simulate data communication with the calling module while testing.
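As a rough sketch (module and function names are made up), a stub can stand in for a lower-level module that is not ready yet, while a driver is the piece of test code that calls the module under test because its real caller does not exist yet:

# Stub: substitutes a missing lower-level payment gateway module.
# It does not implement real logic; it only simulates the response
# the calling module expects during integration testing.
def payment_gateway_stub(amount):
    return {"status": "SUCCESS", "amount": amount}

# Module under test, which normally calls the real payment gateway.
def place_order(amount, gateway=payment_gateway_stub):
    response = gateway(amount)
    return "Order placed" if response["status"] == "SUCCESS" else "Order failed"

# Driver: simple test code that invokes the module under test
# because the real caller (e.g., the UI layer) is not available yet.
if __name__ == "__main__":
    assert place_order(250) == "Order placed"
    print("Integration check with stub and driver passed")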
Bottom-up Integration Testing is a strategy in which the lower level modules are tested first.
These tested modules are then further used to facilitate the testing of higher level modules.
The process continues until all modules at top level are tested. Once the lower level modules
are tested and integrated, then the next level of modules are formed.
Diagrammatic Representation:
Advantages:
Disadvantages:
Critical modules (at the top level of software architecture) which control the flow of
application are tested last and may be prone to defects.
An early prototype is not possible
Top Down Integration Testing is a method in which integration testing takes place from top
to bottom following the control flow of software system. The higher level modules are tested
first and then lower level modules are tested and integrated in order to check the software
functionality. Stubs are used for testing if some modules are not ready.
Diagrammatic Representation:
Advantages:
Disadvantages:
Sandwich Testing
Sandwich Testing is a strategy in which top level modules are tested with lower level
modules at the same time lower modules are integrated with top modules and tested as a
system. It is a combination of Top-down and Bottom-up approaches therefore it is called
Hybrid Integration Testing. It makes use of both stubs as well as drivers.
System Testing
System testing is a type of software testing that evaluates the overall functionality and
performance of a complete and fully integrated software solution. It tests if the system meets
the specified requirements and if it is suitable for delivery to the end-users. This type of
testing is performed after the integration testing and before the acceptance testing.
System testing tests the design and behavior of the system and also the expectations of the customer. It is basically performed by a testing team that is independent of the development team, which helps to test the quality of the system impartially. It involves both functional and non-functional testing. System Testing is a black-box testing technique, performed after integration testing and before acceptance testing.
System Testing is performed in the following steps:
Test Environment Setup: Create testing environment for the better quality testing.
Create Test Case: Generate test case for the testing process.
Execute Test Case: After the generation of the test case and the test data, test cases are
executed.
Regression Testing: It is carried out to test the side effects of the testing process.
Performance Testing: Performance Testing is a type of software testing that is carried out to
test the speed, scalability, stability and reliability of the software product or application.
Load Testing: Load Testing is a type of software Testing which is carried out to determine
the behavior of a system or software product under extreme load.
Stress Testing: Stress Testing is a type of software testing performed to check the robustness
of the system under the varying loads.
Scalability Testing: Scalability Testing is a type of software testing which is carried out to
check the performance of a software application or system in terms of its capability to scale
up or scale down the number of user request load.
Some tools used for system testing are:
JMeter
Galen Framework
Selenium
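As an illustrative sketch only (the endpoint and the number of simulated users are hypothetical), a very simple load test of the kind described above can be written by firing concurrent requests and measuring response times; dedicated tools such as JMeter do the same at a far larger scale and with richer reporting:

import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

URL = "http://localhost:8080/health"   # hypothetical endpoint of the system under test

def timed_request(_):
    # Send one request and return how long it took
    start = time.time()
    with urlopen(URL) as response:
        response.read()
    return time.time() - start

if __name__ == "__main__":
    # Simulate 50 concurrent users, each sending one request
    with ThreadPoolExecutor(max_workers=50) as pool:
        durations = list(pool.map(timed_request, range(50)))
    print(f"average response time: {sum(durations) / len(durations):.3f}s")
    print(f"maximum response time: {max(durations):.3f}s")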
Acceptance Testing
It is a formal testing according to user needs, requirements and business processes conducted
to determine whether a system satisfies the acceptance criteria or not and to enable the users,
customers or other authorized entities to determine whether to accept the system or not.
Acceptance Testing is the last phase of software testing performed after System Testing and
before making the system available for actual use.
Business Acceptance Testing (BAT): BAT is used to determine whether the product meets the business goals and purposes or not. BAT mainly focuses on business profits, which can be quite challenging due to changing market conditions and new technologies, so the current implementation may have to be changed, which results in extra budget.
Contract Acceptance Testing (CAT): CAT is a contract that specifies that once the product
goes live, within a predetermined period, the acceptance test must be performed and it should
pass all the acceptance use cases. Here is a contract termed a Service Level Agreement
(SLA), which includes the terms where the payment will be made only if the Product services
are in-line with all the requirements, which means the contract is fulfilled. Sometimes, this
contract happens before the product goes live. There should be a well-defined contract in
terms of the period of testing, areas of testing, conditions on issues encountered at later
stages, payments, etc.
Regulations Acceptance Testing (RAT): RAT is used to determine whether the product
violates the rules and regulations that are defined by the government of the country where it
is being released. This may be unintentional but will impact negatively on the business.
Generally, the product or application that is to be released in the market, has to go under
RAT, as different countries or regions have different rules and regulations defined by its
governing bodies. If any rules and regulations are violated for any country or specific region, then the product will not be released in that country or region. If the product is released even though there is a violation, then the vendors of the product will be directly responsible.
Alpha Testing: Alpha testing is used to assess the product in the development/testing environment by a specialized team of testers, usually called alpha testers.
Beta Testing: Beta testing is used to assess the product by exposing it to the real end-users,
usually called beta testers in their environment. Feedback is collected from the users and the
defects are fixed. Also, this helps in enhancing the product to give a rich user experience.
This testing helps the project team to know the further requirements from the users directly as
it involves the users for testing.
Automated test execution.
It brings confidence and satisfaction to the clients as they are directly involved in the testing
process.
It is easier for the user to describe their requirement.
It covers only the Black-Box testing process and hence the entire functionality of the product
will be tested.
Test Strategy
A test strategy document is a high-level document that is used to validate the test types or levels to be executed for the product and that specifies the testing approach to be followed across the Software Development Life Cycle. Once the test strategy has been written, we cannot modify it, and it is approved by the Project Manager and the development team.
The test strategy also specifies details that are necessary while writing the test document. In other words, it is a document that expresses how we go about testing the product, and the approach can be created with the help of the following aspects:
Whether to automate or not
The resource point of view
We understand that the test strategy document is made during the requirements phase and after the
requirements have been listed.
Like other testing documents, the test strategy document also includes various components, such as:
Scope and Overview
Testing Methodology
Testing Environment Specifications
Testing Tools
Release Control
Risk Analysis
Review and Approvals
The first component of the test strategy document is Scope and Overview.
The overview of any product contains the information on who should approve, review and use
the document.
The test strategy document also specifies the testing activities and phases that need to be approved.
Testing Methodology
The next module in the test strategy document is the testing methodology, which is mainly used to specify the levels of testing, the testing procedure, and the roles and responsibilities of all the team members.
The testing approach also contains the change management process involving the
modification request submission, pattern to be used, and activity to manage the request.
Above all, if the test strategy document is not established appropriately, then it might lead to
errors or mistakes in the future.
Testing tools are another vital component of the test strategy document, as this section stipulates the complete information about the test management and automation tools necessary for the test execution activity.
For security, performance, and load testing, the necessary methodologies and tools are defined, along with details of whether each tool is open-source or commercial and the number of users it can support.
Release Control
Risk Analysis
The last component of the Testing strategy document is Review and Approval.
When all the related testing activities are specified in the test strategy document, it is
reviewed by the concerned people like:
System Administration Team
Project Management Team
Development Team
Business Team
Here, we discuss some of the significant types of test strategy documents:
Methodical Strategy
Reactive Strategy
Analytical strategy
Another type of test strategy is the analytical strategy, which is used to perform testing based on requirements; the requirements are analyzed to derive the test conditions, and then tests are designed, implemented, and performed to cover those requirements. Examples are risk-based testing and requirements-based testing.
Even the outcomes are recorded in terms of requirements, such as requirements tested and
passed.
Standards compliant or Process compliant strategy
In this type of test strategy, the test engineer will follow the procedures or guidelines created
by a panel of industry specialists or committee standards to find test conditions, describe test
cases, and put the testing team in place.
Suppose a project follows the Scrum Agile technique. In that case, the test engineer will generate the complete test strategy around each user story, beginning from classifying the test criteria and essential test cases, performing the tests, reporting status, etc.
A good example of the standards-compliant process is medical systems following US FDA (Food and Drug Administration) standards.
Model-based strategy
The next type of test strategy is a model-based strategy. The testing team selects the current or
expected situation and produces a model for it with the following aspects: inputs, outputs,
processes, and possible behavior.
And the models are also established based on the current data speeds, software, hardware,
infrastructure, etc.
Regression-averse strategy
In the regression-averse strategy, the test engineer mainly emphasizes decreasing regression risks for functional or non-functional parts of the product.
For example, suppose we have one web application and want to test the regression issues for that particular application. The testing team can develop test automation for both typical and exceptional use cases for this scenario.
And so that the tests can be run whenever the application is modified, the testing team can use GUI-based automation tools.
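A minimal sketch of such a GUI-level regression check, assuming Selenium WebDriver with Python and a hypothetical login page (the URL and element IDs are made up), might look like this:

from selenium import webdriver
from selenium.webdriver.common.by import By

def test_login_regression():
    # Re-run on every build of the web application to catch regressions early
    driver = webdriver.Chrome()
    try:
        driver.get("http://localhost:8080/login")            # hypothetical URL
        driver.find_element(By.ID, "username").send_keys("testuser")
        driver.find_element(By.ID, "password").send_keys("secret")
        driver.find_element(By.ID, "submit").click()
        assert "Dashboard" in driver.title                    # expected post-login page title
    finally:
        driver.quit()

if __name__ == "__main__":
    test_login_regression()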
Consultative strategy
The consultative strategy consults key stakeholders for input when choosing the scope of the test conditions, as in user-directed testing.
In order of priority, the client will provide a list of browsers and their versions, operating systems, a list of connection types, anti-malware software, and also the contradictory list, against which they want the application to be tested.
Depending on the items given in the provided lists, the test engineer may use various testing techniques, such as equivalence partitioning (see the example below).
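For instance (a hypothetical rule, used only for illustration), equivalence partitioning of an age field that accepts values from 18 to 60 gives three classes: below 18 (invalid), 18 to 60 (valid), and above 60 (invalid); one representative test per class is usually enough:

import unittest

def is_valid_age(age):
    # Hypothetical rule under test: ages 18 to 60 are accepted
    return 18 <= age <= 60

class TestAgeEquivalencePartitions(unittest.TestCase):
    def test_invalid_partition_below_range(self):
        self.assertFalse(is_valid_age(10))   # representative of ages below 18

    def test_valid_partition_within_range(self):
        self.assertTrue(is_valid_age(35))    # representative of ages 18 to 60

    def test_invalid_partition_above_range(self):
        self.assertFalse(is_valid_age(70))   # representative of ages above 60

if __name__ == "__main__":
    unittest.main()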
Resource Requirements
Project resources simply mean the resources that are required for the successful development and completion of a project. These resources can be capital, people, material, tools, or supplies that are helpful to carry out certain tasks in the project. Without these resources, it is impossible to complete the project. In the project planning phase, identifying the resources that are required for completion of the project and deciding how they will be allocated is a key element and a very important task. In project management, the required resources are assigned to each task of the project to get the job done.
There are three types of resources that are considered essential for the execution of a project and for its completion on time and on budget. These resources can be denoted by a pyramid, also known as the Resource Pyramid. At the base of the pyramid, hardware and software tools are present; at the middle layer, reusable components are present; and at the top of the pyramid, human resources are present.
When a software planner wants to specify a resource, it is specified using four characteristics:
Description of resource
Resource availability
Time of resource when it will be available
Duration of resource availability
Types of resources :
Human Resource –
Humans play an important role in the software development process. No matter what the size and complexity of the project are, if you want to perform the project tasks in an effective manner, then human resources are essential. In the software industry, people are assigned organizational positions such as manager, software developer, software test engineer, and so on. These positions are assigned according to their skills and specialty.
For a small project, a single individual can perform all these roles, but for a large project a team of people works on it. The total number of people required for the project is estimated by calculating the development effort in terms of person-months.
Reusable Components –
To bring ease into the software development process or to accelerate it, the software industry prefers to use some ready-made software components. A component can be defined as a software building block that can be created and reused in the software development process. Generally, regardless of their type, size, or complexity, all projects need money, and managing the budget for a project is one of the most important tasks that all project managers have to do. Reusable resources, also known as cost resources, are very helpful as they help in reducing the overall cost of development. The use of components emphasizes reusability; this is also termed Component-Based Software Engineering.
Hardware and Software Tools –
These are actually the material resources that are part of the project. This type of resource should be planned before starting the development of the project, otherwise it may cause problems for the project.
For example, if you require certain software elements while performing a task and somehow you cannot manage to get them on time, they could take a few weeks to ship from the manufacturer, and this will cause a delay to your project.
Tester Assignments
Test Schedule
A test schedule includes the testing steps or tasks, the target start and end dates, and responsibilities. It
should also describe how the test will be reviewed, tracked, and approved.
Test Cases
A test case is defined as a group of conditions under which a tester determines whether a software application is working as per the customer's requirements or not. Test case designing includes preconditions, case name, input conditions, and expected result. A test case is a first-level action derived from test scenarios.
A test case gives detailed information about the testing strategy, testing process, preconditions, and expected output. Test cases are executed during the testing process to check whether the software application is performing the task for which it was developed or not.
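As a simple illustration (all field values are hypothetical), one test case for a login screen could be recorded with the fields described above, shown here as a Python dictionary:

test_case = {
    "test_case_id": "TC_LOGIN_001",
    "test_case_name": "Login with valid credentials",
    "precondition": "User is registered and the login page is open",
    "test_steps": [
        "Enter a valid username",
        "Enter the matching password",
        "Click the Login button",
    ],
    "input_data": {"username": "testuser", "password": "secret"},
    "expected_result": "User is redirected to the home/dashboard page",
    "actual_result": "",       # filled in during execution
    "status": "Not Executed",  # Pass / Fail / Blocked after execution
}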
When the customer gives the business needs, the developers start developing and say that they need 3.5 months to build the product.
In the meantime, the testing team will start writing the test cases.
Once they are done, they are sent to the Test Lead for the review process.
And when the developers finish developing the product, it is handed over to the testing team.
While testing the product, the test engineers work from the test cases rather than from the requirement document, because testing should be consistent and should not depend on the mood of the person or the individual quality of the test engineer.
To ensure consistency in test case execution: we look at the test case and start testing the application.
To ensure better test coverage: for this, we should cover all possible scenarios and document them, so that we need not remember all the scenarios again and again.
To depend on the process rather than on a person: Suppose a test engineer tested an application during the first and second releases and left the company at the time of the third release. Because that test engineer understood a module and tested the application thoroughly by deriving many values, it becomes difficult for a new person if those values are not available for the third release. Hence all the derived values are documented so that they can be used in the future.
To avoid giving training for every new test engineer on the product: When the test engineer
leaves, he/she leaves with a lot of knowledge and scenarios. Those scenarios should be documented so
that the new test engineer can test with the given scenarios and also can write the new scenarios.
Here, we are writing a test case for the ICICI application’s Login module:
The functional test cases
First, we check for which field we will write test cases and then describe them accordingly.
In functional testing, or if the application is data-driven, we require the input column; otherwise, it is a bit time-consuming.
Let us say it is the amount transfer module; we then write the functional test cases for it and also specify that it is not the login feature.
The functional test case for amount transfer module is in the below Excel file:
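Since the spreadsheet itself is not reproduced here, a couple of illustrative rows (hypothetical values) for the amount transfer module might look like:

amount_transfer_test_cases = [
    {
        "id": "TC_AT_01",
        "description": "Transfer an amount within the available balance",
        "input": {"from_account": "ACC001", "to_account": "ACC002", "amount": 500},
        "expected_result": "Amount is debited from ACC001 and credited to ACC002",
    },
    {
        "id": "TC_AT_02",
        "description": "Transfer an amount greater than the available balance",
        "input": {"from_account": "ACC001", "to_account": "ACC002", "amount": 999999},
        "expected_result": "Transfer is rejected with an insufficient balance message",
    },
]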
In the integration test cases, we should not write anything that we have already covered in the functional test cases, and anything we have written in the integration test cases should not be written again in the system test cases.
We write the system test cases for the end-to-end business flows, and we have the entire set of modules ready to write the system test cases.
The method of writing a test case can be divided into the following steps:
System study
In this step, we understand the application by looking at the requirements or the SRS, which is given by the customer.
Identify all possible scenarios: when the product is launched, what are the possible ways the end user may use the software? All such ways are identified and documented in a document called the test design / high-level design.
Convert all the identified scenarios to test cases, group the scenarios related to their features, prioritize the modules, and write the test cases by applying test case design techniques and using the standard test case template, i.e., the one that has been decided for the project.
Review the test cases by giving them to the head of the team and, after that, fix the review feedback given by the reviewer.
After fixing the test cases based on the feedback, send them again for approval.
After the approval of the particular test cases, store them in a common place that is known as the test case repository.
Bug Reporting
A bug is an error or defect in software that causes it to behave unexpectedly or produce incorrect
results. A bug may be due to an error in the design, code, or configuration of the software.
For example, Let's say you transfer money to a friend using the mobile banking app, but when you
enter the amount to transfer and click the "Send" button, the app crashes and closes. This is an error in
the transfer function of the application and will prevent you from completing the transaction. The
error can be caused by several factors, such as a memory leak in the application code or problems in
communication between the application and the bank's servers.
Introduction to Bug Report in Software Testing
A bug report is a document that identifies and describes a software problem or defect. A bug report
contains important information that developers can reproduce and fix the problem. A well-written bug
report should include relevant information such as software version, operating system, and any other
relevant information about the environment or configuration. The report should also include a clear
and concise description of the problem, including steps to reproduce it, associated files or error
messages, and assigned severity and priority.
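An illustrative bug report (all details are hypothetical) for the mobile banking crash described earlier could capture the fields listed above, shown here as a Python dictionary:

bug_report = {
    "title": "App crashes when tapping 'Send' on the amount transfer screen",
    "environment": "App v2.3.1, Android 13, Pixel 6",
    "steps_to_reproduce": [
        "Log in to the mobile banking app",
        "Open the transfer screen and enter an amount",
        "Tap the 'Send' button",
    ],
    "expected_result": "A transfer confirmation screen is shown",
    "actual_result": "The app closes immediately (crash)",
    "severity": "Critical",
    "priority": "High",
    "attachments": ["crash_log.txt", "screen_recording.mp4"],
}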
Bug reports are important in software development and testing for several reasons.
Bug detection:
Bug reports help identify software problems or bugs, allowing developers to fix them before the
software is released to users. Identifying problems early in the development process can save time and
resources in the long run.
Bug report provides detailed information about the reproducibility of the problem, which is essential
for developers to understand and solve the problem. Adding screenshots or recordings of the user
interface is a valuable addition to the bug report. Combined with clear and detailed steps to reproduce
the error, this can provide important context for troubleshooting.
Bug prioritization:
Bug reports help prioritize issues by determining their severity and priority levels. This ensures that
critical issues are dealt with first and that less critical issues are resolved later.
Communication:
Bug reports provide a means of communication between testers and developers, allowing them to
work together to find solutions to problems.
Improve quality:
Solving bugs identified in bug reports can improve the quality of software, making it more reliable,
secure, and user-friendly.
Identify the problem: First, identify the problem or bug you encountered in the software. Restate the problem and note the steps you took to get to that point.
Gather information:
Gather any relevant information you can find about the problem, such as software version, operating
system, hardware configuration, and error messages or screenshots.
Title: The title of a bug report should be short and summarize the problem in a few words.
Issue Description:
Write a detailed description of the problem, explaining what's happening, how it's happening, and
why it's a problem. Please provide the exact steps to reproduce the problem and any error messages or
logs that may help identify the root cause.
Severity: Assign a severity level to the problem based on how it affects the software and users. Use a severity scale from low to critical, with critical indicating a major impact on the software.
Priority: Assign a priority level to the issue based on its urgency, importance, and impact on the software and users. Use a scale from low to critical priority, with critical indicating that the problem needs to be addressed immediately.
Attachments: If you have relevant files (such as screenshots, error messages, or logs), attach them to the bug report. These files help developers understand the problem and its root cause.
Submit the report: Once you've filled in the bug report, submit it to the appropriate team or person responsible for resolving the issue.
By following these steps, you can create a comprehensive and effective bug report that will help
developers reproduce and resolve the issue effectively.
Bug Reporting Best Practices
Before creating a bug report, make sure you can reproduce the problem consistently. This helps
developers understand the problem and its root cause.
When you describe the problem, give specific steps to reproduce it, including any input data, error
messages, or logs that may be relevant. This helps developers understand the problem and its context.
Be concise:
Write clear and concise bug reports, focusing on relevant details. Avoid providing irrelevant
information that may distract from the problem.
Assign a severity and priority level to a problem based on how it affects the software and users. This
helps prioritize issues and ensure critical issues are addressed first.
Use a standard format: Use a standard format for bug reports, including the title, description, environment, severity, priority, steps to reproduce, expected behavior, actual behavior, and attachments.
Test the software in multiple environments to ensure that the problem is not limited to a specific
configuration.
Avoid duplication:
Before creating a new bug report, check if the problem has already been reported. Repeated error
reports can waste time and resources.
Adding screenshots or videos helps developers understand the problem and its context, making it
easier to reproduce and solve it.
Jira:
Jira is a popular bug tracking and project management tool that allows teams to effectively track and
manage software issues, tasks, and workflows.
Bugzilla:
Bugzilla is an open source bug tracking system that allows teams to track and manage software issues,
bug reports, and fixes.
MantisBT:
MantisBT is an open source bug tracking system that allows teams to track and manage software
issues, bug reports and project workflows.
Metrics
The purpose of software testing metrics is to increase the efficiency and effectiveness of the software
testing process while also assisting in making better decisions for future testing by providing accurate
data about the testing process. A metric expresses the degree to which a system, system component, or
process possesses a certain attribute in numerical terms.
Test metrics are essential in determining the software’s quality and performance. Developers may use
the right software testing metrics to improve their productivity.
Test metrics help to determine what types of enhancements are required in order to create a defect-
free, high-quality software product.
Make informed judgments about the testing phases that follow, such as project schedule and cost
estimates.
Examine the current technology or procedure to see if it needs any more changes.
Product Metrics: A product’s size, design, performance, quality, and complexity are defined by
product metrics. Developers can improve the quality of their software development by utilizing these
features.
Project Metrics: Project Metrics are used to assess a project’s overall quality. It is used to estimate a
project’s resources and deliverables, as well as to determine costs, productivity, and flaws.
The test metrics life cycle consists of the following stages:
Analysis: Identify the metrics to be captured and define them.
Communicate: Stakeholders and the testing team should be informed about the requirement for metrics, and the testing team should be educated on the data points that must be collected in order to process the metrics.
Evaluation: Capture and verify the data, and calculate the metric values using the captured data.
Report: Develop the report with an effective conclusion and distribute it to the stakeholders for feedback.
To get the percentage execution status of the test cases, the following formula can be used:
Percentage test cases executed = (No of test cases executed / Total no of test cases written) x 100
Similarly, it is possible to calculate for other parameters also such as test cases that were not executed,
test cases that were passed, test cases that were failed, test cases that were blocked, and so on. Below
are some of the formulas:
1. Test Case Effectiveness: This metric indicates the defect-detecting ability of the executed test cases.
Test Case Effectiveness = (Number of defects detected / Number of test cases run) x 100
2. Passed Test Cases Percentage: Test Cases that Passed Coverage is a metric that indicates the
percentage of test cases that pass.
Passed Test Cases Percentage = (Number of test cases passed / Total number of tests executed) x 100
3. Failed Test Cases Percentage: This metric measures the proportion of all failed test cases.
Failed Test Cases Percentage = (Total number of failed test cases / Total number of tests executed) x
100
4. Blocked Test Cases Percentage: During the software testing process, this parameter determines
the percentage of test cases that are blocked.
Blocked Test Cases Percentage = (Total number of blocked tests / Total number of tests executed) x
100
5. Fixed Defects Percentage: Using this measure, the team may determine the percentage of defects
that have been fixed.
Fixed Defects Percentage = (Total number of flaws fixed / Number of defects reported) x 100
6. Rework Effort Ratio: This measure helps to determine the rework effort ratio.
Rework Effort Ratio = (Actual rework efforts spent in that phase/ Total actual efforts spent in that
phase) x 100
7. Accepted Defects Percentage: This measures the percentage of defects that are accepted out of the
total accepted defects.
Accepted Defects Percentage = (Defects Accepted as Valid by Dev Team / Total Defects Reported) x
100
8. Defects Deferred Percentage: This measures the percentage of the defects that are deferred for
future release.
Defects Deferred Percentage = (Defects deferred for future releases / Total Defects Reported) x 100
Consider the following sample project data (these values are used in the calculations below):
No. of requirements: 5
Total no. of test cases written: 200
Total no. of test cases executed: 164
No. of test cases failed: 60
No. of test cases blocked: 4
Total no. of defects detected/reported: 20
Defects accepted as valid by the dev team: 15
Defects deferred for future releases: 5
Defects fixed: 12
1. Percentage test cases executed = (No of test cases executed / Total no of test cases written) x 100
= (164 / 200) * 100
= 82
2. Test Case Effectiveness = (Number of defects detected / Number of test cases run) x 100
= (20 / 164) * 100
= 12.2
3. Failed Test Cases Percentage = (Total number of failed test cases / Total number of tests
executed) x 100
= (60 / 164) * 100
= 36.59
4. Blocked Test Cases Percentage = (Total number of blocked tests / Total number of tests
executed) x 100
= (4 / 164) * 100
= 2.44
5. Fixed Defects Percentage = (Total number of flaws fixed / Number of defects reported) x 100
= (12 / 20) * 100
= 60
6. Accepted Defects Percentage = (Defects Accepted as Valid by Dev Team / Total Defects Reported) x 100
= (15 / 20) * 100
= 75
7. Defects Deferred Percentage = (Defects deferred for future releases / Total Defects Reported) x
100
= (5 / 20) * 100
= 25
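The same calculations can be scripted; the short Python sketch below simply re-computes the figures above from the sample data assumed in this worked example:

# Sample project data assumed in the worked example above
total_written    = 200
executed         = 164
failed           = 60
blocked          = 4
defects_reported = 20
defects_fixed    = 12
defects_accepted = 15
defects_deferred = 5

metrics = {
    "Percentage test cases executed": executed / total_written * 100,             # 82.0
    "Test case effectiveness":        defects_reported / executed * 100,          # ~12.2
    "Failed test cases percentage":   failed / executed * 100,                    # ~36.59
    "Blocked test cases percentage":  blocked / executed * 100,                   # ~2.44
    "Fixed defects percentage":       defects_fixed / defects_reported * 100,     # 60.0
    "Accepted defects percentage":    defects_accepted / defects_reported * 100,  # 75.0
    "Defects deferred percentage":    defects_deferred / defects_reported * 100,  # 25.0
}

for name, value in metrics.items():
    print(f"{name}: {value:.2f}")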
Statistics
Statistical Testing is a testing method whose objective is to determine the reliability of the software product rather than to discover errors. Test cases are designed for statistical testing with an entirely different objective from that of conventional testing.
Operation Profile:
Different classes of users may use a software product for different purposes. For example, a librarian may use the library automation software to create member records, add books to the library, etc., whereas a library member may use the software to query about the availability of a book or to issue and return books. Formally, the operation profile of a software product can be defined as the probability distribution of the input of an average user. If the inputs are divided into a number of classes {Ci}, the probability value of a class represents the probability of an average user selecting his next input from that class. Thus, the operation profile assigns a probability value Pi to every input class Ci.
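As a small sketch (the input classes and probability values are hypothetical), an operation profile can be represented as a probability value Pi for each input class Ci, and test inputs can then be drawn according to that distribution:

import random

# Hypothetical operation profile for the library automation software:
# each input class Ci is assigned a probability value Pi (the values sum to 1.0)
operation_profile = {
    "query_book_availability": 0.50,
    "issue_book":              0.25,
    "return_book":             0.20,
    "create_member_record":    0.05,
}

def next_operation(profile):
    # Select the next test input class according to the operation profile
    classes = list(profile.keys())
    weights = list(profile.values())
    return random.choices(classes, weights=weights, k=1)[0]

if __name__ == "__main__":
    # Generate 10 test operations distributed according to the profile
    for _ in range(10):
        print(next_operation(operation_profile))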
Statistical testing allows one to concentrate on testing those parts of the system that are most likely to be used. The first step of statistical testing is to determine the operation profile of the software. The next step is to generate a set of test data corresponding to the determined operation profile. The third step is to apply the test cases to the software and record the time between each failure. Once a statistically significant number of failures have been observed, the reliability can be computed.
Statistical testing allows one to concentrate on testing the parts of the system that are most likely to be used; therefore, it results in a system that the users find more reliable (than it actually is!). Reliability estimation using statistical testing is more accurate compared to that of other methods such as ROCOF, POFOD, etc. However, it is not easy to perform statistical testing properly: there is no simple and repeatable way of defining operation profiles, and it is also very cumbersome to generate test cases for statistical testing, because the number of test cases with which the system is to be tested should be statistically significant.