Beginners Guide To Software Testing
Table of Contents:
1. Overview
2. Introduction
7. Defect Tracking
What is a defect?
Six Sigma
ISO
1. Overview
Contrary to the common perception that testing starts only after the coding phase is complete, it actually begins before the first line of code is written. In the life cycle of a conventional software product, testing starts as soon as the specifications are written, i.e. with testing the product specification (product spec). Finding bugs at this stage can save huge amounts of time and money.
Once the specifications are well understood, you need to design and execute test cases. Selecting an appropriate technique that covers a feature with the smallest number of tests is one of the most important considerations when designing these test cases. Test cases need to cover all aspects of the software: security, database, functionality (critical and general) and the user interface. Bugs are found when the test cases are executed.
As a tester you may have to work under different circumstances: the application could be in its initial stages or undergoing rapid changes, there may be less time than you need to test, or the product may be developed using a life cycle model that does not support much formal testing or retesting. In addition, testing across different operating systems, browsers and configurations has to be taken care of.
Reporting a bug may be the most important, and sometimes the most difficult, task that you as a software tester will perform. By using the right tools and communicating clearly with the developers, you can ensure that the bugs you find get fixed. Using automated tools to execute tests, run scripts and track bugs improves the efficiency and effectiveness of your testing. Keeping pace with the latest developments in the field will also advance your career as a software test engineer.
Following are two cases that demonstrate the importance of software quality:
- The maiden flight of the European Ariane 5 launcher crashed about 40 seconds after takeoff
- The loss was about half a billion dollars
- The explosion was the result of a software error
- An uncaught exception due to a floating-point error: a conversion from a 64-bit floating-point value to a 16-bit signed integer was applied to a larger-than-expected number
- The module was re-used from Ariane 4 without proper testing
- The error could not occur with Ariane 4
- There was no exception handler for the overflow
• An explorer. A bit of creativity and a willingness to take risks help testers venture into unknown situations and find bugs that would otherwise be overlooked.
• A troubleshooter. Troubleshooting and figuring out why something doesn't work helps testers be confident and clear when communicating defects to developers.
• Possesses people skills and tenacity. Testers can face a lot of resistance from programmers. Being socially smart and diplomatic doesn't mean being indecisive; the best testers are both socially adept and tenacious where it matters.
• Organized. The best testers realize that they too can make mistakes and don't take chances. They are well organized, keep checklists, and use files, facts and figures as evidence to support and double-check their findings.
• Objective and accurate. They are objective about what they report, conveying impartial and meaningful information that keeps politics and emotions out of the message. Reporting inaccurate information costs credibility, so good testers make sure their findings are accurate and reproducible.
• Treats defects as valuable. Good testers learn from defects: each one is an opportunity to learn and improve. A defect found early costs substantially less than one found at a later stage, and defects can cause serious problems if not managed properly. Learning from defects helps prevent future problems, track improvements, and improve prediction and estimation.
2. Introduction
harder when a bug has a very complex life cycle, i.e. when the number of times it has been closed, re-opened, not accepted, or ignored keeps increasing.
The Bug Life Cycle describes the states a bug passes through from the time it is found until it is closed, and shows how a bug can be tracked using Bug Tracking Tools (BTT).
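
To make the idea concrete, here is a minimal sketch in Python of a bug life cycle modeled as a small state machine. The state names and allowed transitions are assumptions based on the states mentioned above (new, open, fixed, re-opened, rejected, closed); real bug tracking tools define their own workflows.

    # Minimal bug life cycle sketch. States and transitions are assumptions,
    # not the workflow of any particular bug tracking tool.
    ALLOWED_TRANSITIONS = {
        "New":      {"Open", "Rejected"},
        "Open":     {"Fixed", "Rejected"},
        "Fixed":    {"Retest"},
        "Retest":   {"Closed", "Reopened"},
        "Reopened": {"Fixed"},
        "Rejected": {"Closed"},
        "Closed":   {"Reopened"},
    }

    class Bug:
        def __init__(self, summary):
            self.summary = summary
            self.status = "New"
            self.history = ["New"]

        def move_to(self, new_status):
            # Only allow transitions defined in the workflow above.
            if new_status not in ALLOWED_TRANSITIONS[self.status]:
                raise ValueError(f"cannot move from {self.status} to {new_status}")
            self.status = new_status
            self.history.append(new_status)

    bug = Bug("Save button does nothing on the settings page")
    for step in ("Open", "Fixed", "Retest", "Reopened", "Fixed", "Retest", "Closed"):
        bug.move_to(step)
    print(" -> ".join(bug.history))   # the complex life cycle described above

A tool built around such a model can report, for any bug, how often it has been re-opened or rejected, which is exactly the information that makes a complex life cycle hard to track by hand.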
• Integration Testing: verifies the interaction between system components. Prerequisite: unit testing completed on all components that compose the system.
• System Testing: verifies and validates the behavior of the entire system against the original system objectives.
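
As a small illustration of how integration testing builds on completed unit testing, the Python sketch below (with invented component names) first checks two components in isolation and then checks their interaction through the interface that joins them.

    # Hypothetical components: a parser and a calculator, integrated into a
    # tiny expression evaluator.
    def parse(text):                  # component A
        left, op, right = text.split()
        return float(left), op, float(right)

    def apply_op(left, op, right):    # component B
        return left + right if op == "+" else left - right

    def evaluate(text):               # integration of A and B
        return apply_op(*parse(text))

    # Unit tests (the prerequisite): each component in isolation.
    assert parse("2 + 3") == (2.0, "+", 3.0)
    assert apply_op(2.0, "-", 3.0) == -1.0

    # Integration test: the components working together across their interface.
    assert evaluate("2 + 3") == 5.0
    assert evaluate("10 - 4") == 6.0
    print("integration checks passed")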
• Integration Testing: Testing two or more modules or functions together with the
intent of finding interface defects between the modules/functions.
• System Integration Testing: Testing of software components that have been distributed across multiple platforms (e.g., client, web server, application server, and database server) to produce failures caused by system integration defects (i.e. defects involving distribution and back-end processing).
• Functional Testing: Verifying that a module functions as stated in the specification
and establishing confidence that a program does what it is supposed to do.
• End-to-end Testing: Similar to system testing - testing a complete application in a
situation that mimics real world use, such as interacting with a database, using
network communication, or interacting with other hardware, application, or system.
• Sanity Testing: Sanity testing is performed whenever cursory testing is sufficient to
prove the application is functioning according to specifications. This level of testing is
a subset of regression testing. It normally includes testing basic GUI functionality to
demonstrate connectivity to the database, application servers, printers, etc.
• Regression Testing: Testing with the intent of determining if bug fixes have been
successful and have not created any new problems.
• Acceptance Testing: Testing the system with the intent of confirming readiness of
the product and customer acceptance. Also known as User Acceptance Testing.
• Ad hoc Testing: Testing without a formal test plan or outside of a test plan. With
some projects this type of testing is carried out as an addition to formal testing.
Sometimes, if testing occurs very late in the development cycle, this will be the only
kind of testing that can be performed – usually done by skilled testers. Sometimes ad
hoc testing is referred to as exploratory testing.
• Configuration Testing: Testing to determine how well the product works with a
broad range of hardware/peripheral equipment configurations as well as on different
operating systems and software.
• Load Testing: Testing with the intent of determining how well the product handles
competition for system resources. The competition may come in the form of network
traffic, CPU utilization or memory allocation.
• Stress Testing: Testing done to evaluate how the system behaves when it is pushed to and beyond its limits. The goal is to expose the weak links and to determine whether the system manages to recover gracefully.
• Performance Testing: Testing with the intent of determining how efficiently a
product handles a variety of events. Automated test tools geared specifically to test
and fine-tune performance are used most often for this type of testing.
• Usability Testing: Usability testing is testing for 'user-friendliness'. A way to
evaluate and measure how users interact with a software product or site. Tasks are
given to users and observations are made.
• Installation Testing: Testing with the intent of determining if the product is
compatible with a variety of platforms and how easily it installs.
• Recovery/Error Testing: Testing how well a system recovers from crashes,
hardware failures, or other catastrophic problems.
• Security Testing: Testing of database and network software in order to keep
company data and resources secure from mistaken/accidental users, hackers, and
other malevolent attackers.
• Penetration Testing: Penetration testing is testing how well the system is
protected against unauthorized internal or external access, or willful damage. This
type of testing usually requires sophisticated testing techniques.
• Compatibility Testing: Testing used to determine whether other system software
components such as browsers, utilities, and competing software will conflict with the
software being tested.
• Exploratory Testing: Any testing in which the tester dynamically changes what
they're doing for test execution, based on information they learn as they're executing
their tests.
• Comparison Testing: Testing that compares software weaknesses and strengths to
those of competitors' products.
• Alpha Testing: Testing after code is mostly complete or contains most of the
functionality and prior to reaching customers. Sometimes a selected group of users
are involved. More often this testing will be performed in-house or by an outside
testing firm in close cooperation with the software engineering department.
• Beta Testing: Testing after the product is code complete. Betas are often widely
distributed or even distributed to the public at large.
• Gamma Testing: Gamma testing is testing of software that has all the required features but has not gone through all the in-house quality checks.
• Mutation Testing: A method for determining test thoroughness by measuring the extent to which the test cases can distinguish the program from slight variants (mutants) of the program.
• Independent Verification and Validation (IV&V): The process of exercising
software with the intent of ensuring that the software system meets its requirements
and user expectations and doesn't fail in an unacceptable manner. The individual or
group doing this work is not part of the group or organization that developed the
software.
• Pilot Testing: Testing that involves the users just before actual release to ensure
that users become familiar with the release contents and ultimately accept it.
Typically involves many users, is conducted over a short period of time and is tightly
controlled. (See beta testing)
• Parallel/Audit Testing: Testing where the user reconciles the output of the new
system to the output of the current system to verify the new system performs the
operations correctly.
• Glass Box/Open Box Testing: Glass box testing is the same as white box testing. It
is a testing approach that examines the application's program structure, and derives
test cases from the application's program logic.
• Closed Box Testing: Closed box testing is the same as black box testing; a type of testing that considers only the functionality of the application.
• Bottom-up Testing: Bottom-up testing is a technique for integration testing in which the low-level components are tested and integrated first. A test engineer creates and uses test drivers to stand in for the higher-level components that have not yet been developed.
• Smoke Testing: A quick, non-exhaustive test of a build's major functions, conducted to check that the build is stable enough for further, complete testing.
Testing Terms
• Bug: A software bug may be defined as a coding error that causes an unexpected
defect, fault or flaw. In other words, if a program does not perform as intended, it is
most likely a bug.
• Error: A mismatch between the program and its specification is an error in the
program.
• Defect: A defect is a variance from a desired product attribute (it can be wrong, missing or extra data). Defects are of two types – a variance from the product specification or a variance from customer/user expectations. A defect is a flaw in the software system and has no impact until it affects the user/customer or the operational system. As many as 90% of defects can be traced to process problems.
Purpose
To describe the scope, approach, resources, and schedule of the testing activities. To identify the items being tested, the features to be tested, the testing tasks to be performed, the personnel responsible for each task, and the risks associated with this plan.
OUTLINE
A test plan shall have the following structure:
• Testing tasks
• Environmental needs
• Responsibilities
• Staffing and training needs
• Schedule
• Risks and contingencies
• Approvals
General Guidelines
As a tester, the best way to determine the compliance of the software to
requirements is by designing effective test cases that provide a thorough test of a
unit. Various test case design techniques enable the testers to develop effective test
cases. Besides, implementing the design techniques, every tester needs to keep in
mind general guidelines that will aid in test case design:
a. The purpose of each test case is to run the test in the simplest way possible.
[Suitable techniques - Specification derived tests, Equivalence partitioning]
b. Concentrate initially on positive testing, i.e. the test case should show that the software does what it is intended to do. [Suitable techniques - Specification derived tests, Equivalence partitioning, State-transition testing]
c. Existing test cases should be enhanced and further test cases should be designed to show that the software does not do anything it is not specified to do, i.e. negative testing; a short sketch of both appears after this list. [Suitable techniques - Error guessing, Boundary value analysis, Internal boundary value testing, State-transition testing]
d. Where appropriate, test cases should be designed to address issues such as
performance, safety requirements and security requirements [Suitable techniques -
Specification derived tests]
e. Further test cases can then be added to the unit test specification to achieve
specific test coverage objectives. Once coverage tests have been designed, the test
procedure can be developed and the tests executed [Suitable techniques - Branch
testing, Condition testing, Data definition-use testing, State-transition testing]
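
The Python sketch below illustrates guidelines b and c with a hypothetical validation function: positive tests show that the software does what it is specified to do, negative tests show that it rejects what it should not accept.

    # Hypothetical specification: usernames are 3-10 lowercase letters.
    import re

    def is_valid_username(name):
        return bool(re.fullmatch(r"[a-z]{3,10}", name))

    # Positive tests (guideline b): valid inputs are accepted.
    assert is_valid_username("abc")
    assert is_valid_username("tester")

    # Negative tests (guideline c): invalid inputs are rejected.
    assert not is_valid_username("ab")           # too short
    assert not is_valid_username("abcdefghijk")  # too long
    assert not is_valid_username("Tester1")      # wrong characters
    assert not is_valid_username("")             # empty input
    print("positive and negative checks passed")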
Test case design techniques:
• Specification derived tests
• Equivalence partitioning
• Boundary value analysis
• Internal boundary value testing
• State-transition testing
• Branch testing
• Condition testing
• Data definition-use testing
• Error guessing
For example, suppose a field is required to accept amounts of money between $0 and $10. As a tester, you need to check what "between" means here – whether $9.99 is the largest acceptable amount or $10 itself is also acceptable. The boundary values to test are therefore $0, $0.01, $9.99 and $10.
If the specification excludes $10, the following tests can be executed: a negative value should be rejected, $0 should be accepted (it is on the lower boundary), $0.01 and $9.99 should be accepted, and null input and $10 should be rejected. In this way, boundary value analysis uses the same concept of partitions as equivalence partitioning.
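
The boundary values translate directly into test cases. The Python sketch below assumes the interpretation in which $10 itself is rejected and empty input is invalid; the accepts_amount function is hypothetical and simply stands in for the field under test.

    # Boundary value test cases for a field accepting $0 up to, but not
    # including, $10 (an assumed interpretation of the specification).
    def accepts_amount(value):
        if value is None or value == "":
            return False
        amount = float(value)
        return 0 <= amount < 10

    cases = [
        ("-0.01", False),  # just below the lower boundary
        ("0",     True),   # on the lower boundary
        ("0.01",  True),   # just above the lower boundary
        ("9.99",  True),   # just below the upper boundary
        ("10",    False),  # on the upper boundary (excluded in this reading)
        (None,    False),  # null input
    ]
    for value, expected in cases:
        assert accepts_amount(value) == expected, value
    print("boundary value checks passed")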
State Transition Testing
As the name suggests, test cases are designed to test the transitions between states by triggering the events that cause those transitions.
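
As a brief illustration (the order workflow below is invented), each test case fires the event that causes one transition and then checks the resulting state.

    # Hypothetical state machine: order states change in response to events.
    TRANSITIONS = {
        ("created", "pay"):    "paid",
        ("paid",    "ship"):   "shipped",
        ("created", "cancel"): "cancelled",
        ("paid",    "cancel"): "refunded",
    }

    def next_state(state, event):
        # Events with no defined transition leave the state unchanged.
        return TRANSITIONS.get((state, event), state)

    # One test case per transition, plus one for an event with no transition.
    assert next_state("created", "pay") == "paid"
    assert next_state("paid", "ship") == "shipped"
    assert next_state("created", "cancel") == "cancelled"
    assert next_state("paid", "cancel") == "refunded"
    assert next_state("shipped", "cancel") == "shipped"
    print("state transition checks passed")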
Branch Testing
In branch testing, test cases are designed to exercise the control flow branches or decision points in a unit. This is usually aimed at achieving a target level of decision (branch) coverage: both the IF and the ELSE branch of every decision need to be tested. All branches and compound conditions (e.g. loops and array handling) within the unit should be exercised at least once.
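
The sketch below applies branch testing to a small hypothetical unit. Three test cases are enough to take the IF branch, the ELSE branch, and both the entered and skipped cases of the loop at least once.

    # A unit with one loop and one IF/ELSE decision.
    def total_with_discount(prices, threshold=100):
        total = 0
        for price in prices:      # loop: body entered / body skipped
            total += price
        if total > threshold:     # decision point
            return total * 0.9    # IF branch: 10% discount
        else:
            return total          # ELSE branch: no discount

    assert total_with_discount([]) == 0           # loop skipped, ELSE branch
    assert total_with_discount([30, 40]) == 70    # loop entered, ELSE branch
    assert total_with_discount([60, 60]) == 108   # loop entered, IF branch
    print("branch coverage checks passed")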
Condition Testing
The object of condition testing is to design test cases to show that the individual
components of logical conditions and combinations of the individual components are
correct. Test cases are designed to test the individual elements of logical expressions,
both within branch conditions and within other expressions in a unit.
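
For example (the function below is hypothetical), the decision combines two individual conditions; the test cases vary each condition independently, so a fault in either one would be exposed.

    # A compound condition with two individual components.
    def can_checkout(cart_total, is_logged_in):
        return cart_total > 0 and is_logged_in

    assert can_checkout(50, True)  is True    # both conditions true
    assert can_checkout(0,  True)  is False   # only the first condition false
    assert can_checkout(50, False) is False   # only the second condition false
    assert can_checkout(0,  False) is False   # both conditions false
    print("condition checks passed")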
Data Definition – Use Testing
Data definition-use testing designs test cases to test pairs of data definitions and uses. A data definition is any point where the value of a data item is set; a data use is any point where a data item is read or used. The objective is to create test cases that will drive execution through paths between specific definitions and uses.
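
A minimal sketch, assuming a hypothetical shipping_cost function: the variable rate has two definitions and one use, so two test cases are needed to cover both definition-use pairs.

    def shipping_cost(weight_kg, express):
        if express:
            rate = 12.0              # definition 1 of `rate`
        else:
            rate = 5.0               # definition 2 of `rate`
        return weight_kg * rate      # use of `rate`

    assert shipping_cost(2, express=True) == 24.0    # covers definition 1 -> use
    assert shipping_cost(2, express=False) == 10.0   # covers definition 2 -> use
    print("definition-use checks passed")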
Internal Boundary Value Testing
In many cases, partitions and their boundaries can be identified from a functional
specification for a unit, as described under equivalence partitioning and boundary
value analysis above. However, a unit may also have internal boundary values that
can only be identified from a structural specification.
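
For instance, a functional specification may only say "display a human-readable file size", while the implementation (hypothetical, below) switches units at 1024 bytes. That internal boundary is visible only from the code, and test cases should sit on and around it.

    def format_size(num_bytes):
        if num_bytes < 1024:                   # internal boundary at 1024
            return f"{num_bytes} B"
        return f"{num_bytes / 1024:.1f} KB"

    assert format_size(1023) == "1023 B"       # just below the internal boundary
    assert format_size(1024) == "1.0 KB"       # on the boundary
    assert format_size(1536) == "1.5 KB"       # above the boundary
    print("internal boundary checks passed")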
Error Guessing
It is a test case design technique where the testers use their experience to guess the
possible errors that might occur and design test cases accordingly to uncover them.
Using any one, or a combination, of the test case design techniques described above, you can develop effective test cases.
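
A brief sketch of error guessing in practice: based on experience, the tester guesses the kinds of input that commonly break string handling and turns each guess into a check. The normalize_name function is invented for illustration.

    def normalize_name(name):
        # Collapse internal whitespace, trim the ends, and title-case the result.
        return " ".join(name.split()).title()

    guessed_inputs = [
        "",                    # empty string
        "   ",                 # whitespace only
        "a" * 10_000,          # very long input
        "  anna   maría  ",    # extra spaces and a non-ASCII character
    ]
    for value in guessed_inputs:
        result = normalize_name(value)     # should not raise for any guess
        assert result == result.strip()    # no stray leading/trailing whitespace
    print("error guessing checks passed")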
7. Defect Tracking
What is a defect?
As discussed earlier, a defect is a variance from a desired product attribute (it can be wrong, missing or extra data). Defects are of two types – a variance from the product specification or a variance from customer/user expectations. A defect is a flaw in the software system and has no impact until it affects the user/customer or the operational system.
User Interface Defects: As the name suggests, these bugs deal with problems in the user interface and are usually considered less severe.
Examples:
- Improper error/warning/UI messages
- Spelling mistakes
- Alignment problems
Once the test cases are developed using the appropriate techniques, they are executed, and this is when bugs are found. It is very important that these bugs be reported as soon as possible because the earlier you report a bug, the more time remains in the schedule to get it fixed.
As a simple example, if you report incorrect functionality documented in the Help file a few months before the product release, the chances that it will be fixed are very high. If you report the same bug a few hours before the release, the odds are that it won't be fixed. The bug is the same whether you report it a few months or a few hours before the release; what matters is the time remaining.
It is not enough just to find the bugs; they must also be reported and communicated clearly and efficiently, especially since many different people will read the defect report.
Defect tracking tools (also known as bug tracking tools, issue tracking tools or
problem trackers) greatly aid the testers in reporting and tracking the bugs
found in software applications. They provide a means of consolidating a key
element of project information in one place. Project managers can then see
which bugs have been fixed, which are outstanding and how long it is taking to
fix defects. Senior management can use reports to understand the state of the
development process.
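
As a rough sketch of the kind of record such tools consolidate (the field names are assumptions; real trackers have far richer schemas), a defect entry and two simple management views might look like this in Python:

    from dataclasses import dataclass
    from collections import Counter

    @dataclass
    class Defect:
        defect_id: int
        summary: str
        severity: str     # e.g. "critical", "major", "minor"
        status: str       # e.g. "open", "fixed", "closed"
        assigned_to: str

    defects = [
        Defect(101, "Crash when saving an empty file", "critical", "open", "dev1"),
        Defect(102, "Spelling mistake on login page", "minor", "fixed", "dev2"),
        Defect(103, "Report totals are wrong", "major", "open", "dev1"),
    ]

    print(Counter(d.status for d in defects))                      # fixed vs outstanding
    print([d.defect_id for d in defects if d.status == "open"])    # outstanding bugs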
Page 32
Beginners Guide To Software Testing
Page 33
Beginners Guide To Software Testing
Page 34
Beginners Guide To Software Testing
Page 35
Beginners Guide To Software Testing
Approaches to Automation
There are three broad options in Test Automation:
Fully manual testing has the benefit of being relatively cheap and effective. But as the quality of the product improves, the additional cost of finding further bugs rises. Large-scale manual testing also implies large testing teams, with the associated costs of office space and infrastructure. Manual testing is far more responsive and flexible than automated testing, but it is prone to tester error through fatigue.
Fully automated testing is very consistent and allows similar tests to be repeated at very little marginal cost. However, the setup and purchase costs of such automation are very high, and maintenance can be equally expensive. Automation is also relatively inflexible and requires rework to adapt to changing requirements.
Partial automation applies automation only where the most benefit can be achieved. The advantage is that it targets the tasks best suited to automation and so gains the most from them. It also retains a large component of manual testing, which keeps the test team flexible and provides redundancy by backing up the automation with manual testing. The disadvantage is that it does not provide benefits as extensive as either extreme.
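
The "little marginal cost" of automation comes from the fact that, once a check has been written, adding another input or configuration costs almost nothing. A minimal data-driven sketch (with an invented discounted_price function) illustrates the idea.

    def discounted_price(price, percent):
        return round(price * (1 - percent / 100), 2)

    # Data-driven cases: adding another row costs almost nothing to run again.
    cases = [
        (100.0, 0,  100.0),
        (100.0, 10, 90.0),
        (200.0, 50, 100.0),
        (80.0,  25, 60.0),
    ]
    for price, percent, expected in cases:
        assert discounted_price(price, percent) == expected, (price, percent)
    print(f"{len(cases)} automated checks passed")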
Six Sigma
Six Sigma is a quality management program to achieve "six sigma" levels of quality. It
was pioneered by Motorola in the mid-1980s and has spread too many other
manufacturing companies, notably General Electric Corporation (GE).
Six Sigma is a rigorous and disciplined methodology that uses data and statistical analysis to measure and improve a company's operational performance by identifying and eliminating "defects", from manufacturing to transactional processes and from products to services. Commonly defined as 3.4 defects per million opportunities, Six Sigma can be defined and understood at three distinct levels: metric, methodology and philosophy.
Six Sigma processes are executed by Six Sigma Green Belts and Six Sigma Black Belts, and are overseen by Six Sigma Master Black Belts.
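
The 3.4 figure comes from the standard defects-per-million-opportunities (DPMO) calculation, sketched below with made-up production numbers.

    def dpmo(defects, units, opportunities_per_unit):
        # Defects per million opportunities.
        return defects / (units * opportunities_per_unit) * 1_000_000

    # Example: 17 defects found in 10,000 units, each with 500 opportunities for error.
    value = dpmo(defects=17, units=10_000, opportunities_per_unit=500)
    print(round(value, 2), "defects per million opportunities")   # 3.4
    # Six Sigma quality corresponds to at most 3.4 defects per million opportunities.
    print("meets Six Sigma" if round(value, 2) <= 3.4 else "does not meet Six Sigma")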
ISO
ISO, the International Organization for Standardization, is a network of the national standards institutes of 150 countries, on the basis of one member per country, with a Central Secretariat in Geneva, Switzerland, that coordinates the system. ISO is a non-governmental organization. ISO has developed over 13,000 International Standards on a variety of subjects.
1. The best programmers are up to 28 times better than the worst programmers.
2. New tools/techniques cause an initial LOSS of productivity/quality.
3. The answer to a feasibility study is almost always “yes”.
4. A May 2002 report prepared for the National Institute of Standards and Technology (NIST)(1) estimated the annual cost of software defects in the United States at $59.5 billion.
5. Reusable components are three times as hard to build.
6. For every 25% increase in problem complexity, there is a 100% increase in solution
complexity.
7. 80% of software work is intellectual. A fair amount of it is creative. Little of it is
clerical.
8. Requirements errors are the most expensive to fix during production.
9. Missing requirements are the hardest requirement errors to correct.
10. Error-removal is the most time-consuming phase of the life cycle.
11. Software is usually tested at best at the 55-60% (branch) coverage level.
12. 100% coverage is still far from enough.
13. Rigorous inspections can remove up to 90% of errors before the first test case is
run.
15. Maintenance typically consumes 40-80% of software costs. It is probably the most
important life cycle phase of software.
16. Enhancements represent roughly 60% of maintenance costs.
17. There is no single best approach to software error removal.
Happy Testing!!!