
Chapter 4


Software Testing Strategies and Methods
Contents:
4.1 Software Testing Fundamentals
 Definition of Software Testing
 Concept of: Good Test, Successful Test, Testing Strategies, Test Plan, Test Cases, Test Data
4.2 Characteristics of Testing Strategies
4.3 Software Verification and Validation (V&V): concept and difference between the two
4.4 Testing Strategies
 Unit Testing
 Integration Testing
 Top-Down Approach
 Bottom-Up Approach
 Regression Testing
 Smoke Testing
4.5 Alpha and Beta Testing (concept and differences)
4.6 System Testing
 Concept of System Testing
 Types (Recovery, Security, Stress, Performance Testing) with examples
4.7 Concept of White-Box and Black-Box Testing
4.8 Debugging
 Concept and need of Debugging
 Characteristics of bugs
4.9 Debugging Strategies
 Concept of Brute Force, Backtracking
Software Testing Fundamentals
• Takes place during the software development process.
• Ensures the quality of the software product.
• Ready-made packages and tools are available for implementing testing.
• The objective of testing is to find errors in the program.
• A software engineer must design software that is flexible enough to test.
Objectives of Software Testing
1. Find errors and fix them.
2. Good testing has a high probability of finding errors in the program that have not yet been discovered.
3. To fulfill users' requirements regarding the product.
4. To enforce standards.
5. To support software quality assurance.
6. To check the operability and flexibility of the software.
7. To check the reliability of the software product (the ability of a system or component to perform its required functions under stated conditions for a specified period of time).
Testability
• Testability is the ability of software to be tested. It can be defined as "how easily a computer program can be tested".
• Characteristics that make software testable:
1. Operability
2. Observability
3. Controllability
4. Simplicity
5. Stability
6. Understandability
7. Reliability
8. Safety & security
1. Operability: refers to the ability of the software to be operated easily.
Operability ensures that:
a) The system has a small number of bugs.
b) Bugs won't block the execution of tests.
c) The product's functional state is well known, which allows simultaneous development and testing.
2. Observability: in short, "what you can see is what you can test".
1. A different output is generated for each and every input.
2. While the software executes, the various system states and variables can be inspected and their values determined.
3. Previous system states and variables can likewise be examined and their values found.
4. Each and every factor that affects the output is clearly visible.
5. Wrong output can be easily spotted.
6. Self-testing: internal errors are detected automatically (see the assertion sketch below).
7. Internal errors are reported automatically.
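To make points 6 and 7 concrete, here is a minimal sketch in Python of self-testing code; the average function and its invariants are illustrative assumptions, not part of this chapter. The assertions detect and report an internal error automatically the moment a state becomes inconsistent:

# A minimal sketch of "self testing": the function checks its own
# invariants with assertions, so internal errors surface automatically
# instead of silently corrupting later output. Names are illustrative.

def average(values):
    """Return the arithmetic mean of a non-empty list of numbers."""
    assert len(values) > 0, "internal error: empty input reached average()"
    result = sum(values) / len(values)
    # Post-condition: the mean must lie between the min and max inputs.
    assert min(values) <= result <= max(values), \
        "internal error: mean outside input range"
    return result

print(average([2, 4, 6]))  # 4.0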
3. Controllability: the more control we have over the software, the more the testing can be automated and optimized.
Benefits:
• Combinations of inputs can be used to generate all possible types of output.
• All of the code is executable through some combination of inputs.
• Testing team members can directly control the variables and states of the software as well as the hardware.
• Input and output formats are consistent and structured.
• Tests can be conveniently specified, automated, and reproduced.
4. Simplicity: the less there is to test, the more quickly we can test it.
Types:
1) Functional simplicity
2) Structural simplicity
3) Code simplicity
5. Stability: the fewer the changes, the fewer the disruptions to testing.
Changes to the software are:
1. Infrequent
2. Controlled
3. Such that they don't invalidate existing tests.
6. Understandability: the ability to understand the software; the more information we have, the smarter we can test.
Understandability ensures the following:
1) The software design is properly understood.
2) Dependencies between internal, external, and shared components are well known.
3) Changes made to the design are communicated as they occur.
4) Technical documentation is instantly accessible, well organized, specific, detailed, and accurate.
7. Reliability
8. Safety & security
Definition of Software Testing
• Testing is the most important element of SQA: the software's specifications, its design, and the generated code are ultimately reviewed through testing.
• Software testing is the process of executing a program with the intent of finding errors.
• Software testing includes activities that evaluate the attributes and capabilities of a software program to check whether it gives the desired results.
Concept of a Good Test
1) A good test has a high probability of finding an error. (The tester should know the software thoroughly and develop an imaginative picture of how the software might fail.)
2) A good test is not redundant. Because testing time and resources are very limited, tests should not be duplicated; every test should have a unique purpose.
3) A good test should be "best of breed": from a group of similar tests, the one that will find the most errors should be chosen.
4) A good test should be neither too simple nor too complex. We may wish to combine multiple tests, but doing so may mask errors; in general, each test should be executed separately.
Concept of a Successful Test
• The key objectives of testing that make a test a successful test:
1. Testing is the process of executing a program with the intention of finding errors.
2. A good test has a very high probability of finding errors that have not yet been found.
• Testing conducted according to these objectives will uncover errors in the software.
• The benefit of successful testing is confidence that the software functions according to its design and specification.
• Successful software testing leads to an increase in the reliability and quality of the software.
Tools Used for Testing
1) Test plans  2) Test cases  3) Test data

Test plan:
• A software test plan is a document containing the strategy that will be used to verify that the software product adheres to its design specifications and other requirements.
• A test plan may cover the following tests:
1) Design verification test
2) Development test
3) Acceptance test
4) Service/repair test
5) Regression test
• The test plan format varies from organization to organization.
• The important aspect is that it should contain three elements: test coverage, test methods, and test responsibilities.
• Test coverage is decided according to the design and specification of the software product.
• The test methods in the plan guide how the test coverage will be implemented.
• Test responsibilities specify which test methods can be used at each stage of the product's life.
Test cases:
• A test case is a set of conditions under which a testing team determines whether a software system is working correctly or not.
• Categories:
1) Formal test cases: the input is known and the output is expected, both worked out before the test is executed (a minimal sketch follows below).
2) Informal test cases: used in scenario testing; they generally involve multiple steps and are not written down like formal tests.
3) Typical written test cases: contain the test case data documented in detail.
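As an illustration, here is a minimal sketch of a formal test case written with Python's unittest module. The discount_price function is a hypothetical unit under test, not taken from this chapter; the input and the expected output are fixed before the test runs:

# A minimal sketch of a formal test case: known input, expected output.
import unittest

def discount_price(price, percent):
    """Hypothetical unit under test: apply a percentage discount."""
    return round(price * (1 - percent / 100), 2)

class TestDiscountPrice(unittest.TestCase):
    def test_ten_percent_discount(self):
        # Known input: price=200.0, percent=10; expected output: 180.0
        self.assertEqual(discount_price(200.0, 10), 180.0)

if __name__ == "__main__":
    unittest.main()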
Test data:
• Data which have been specifically identified for use in tests.
• Purposes:
1) It can be used as input to tests to produce the expected output.
2) To check the ability/behavior of a software program to deal with unexpected, unusual, extreme, and exceptional input.
3) To include data which describes details about the …


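A minimal sketch of test data for the hypothetical discount_price function above, covering typical, boundary, and exceptional inputs; all values and expected outputs are illustrative assumptions:

# Test data rows: (price, percent, expected_output).
test_data = [
    (200.0, 10, 180.0),   # typical input producing the expected output
    (200.0, 0, 200.0),    # boundary: no discount
    (200.0, 100, 0.0),    # boundary: full discount
    (0.0, 50, 0.0),       # boundary: zero price
    (200.0, -5, None),    # exceptional: negative percent, behavior under test
    (200.0, 150, None),   # exceptional: discount over 100%
]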
Characteristics of Testing Strategies
• The testing strategy needs to specify the requirements of the product in a measurable way before testing starts.
• Testing objectives should be explicitly stated in the testing strategy.
• A good testing strategy understands the users of the software and develops a profile for every category of user.
• The testing strategy should develop a testing plan that focuses on "rapid cycle testing".
• The strategy should build robust software that is capable of testing itself.
• Formal technical reviews (FTRs) should be conducted to assess the testing strategy and the test cases themselves.
• The testing strategy should apply a continuous-improvement approach to the testing process.
Verification vs. Validation
• Verification is the set of activities that ensures software correctly implements a specific function. Validation is a different set of activities that ensures the software that has been built is traceable to customer requirements.
• Verification evaluates plans, documents, code, requirements, specifications, etc. Validation evaluates the product itself.
• Verification takes place before validation. Validation is the next step after verification.
• The inputs to verification can be checklists, reviews of meetings, and plans. The input to validation is actual testing of the product.
Testing Strategies
[Figure: the testing strategy spiral. Moving outward from the vertex, Unit Testing, Integration Testing, Validation Testing, and System Testing correspond to Code, Design, Requirements, and System Engineering respectively.]
• In the beginning, requirements analysis takes place; after that comes software design, and after the design phase the actual development, i.e. coding, takes place.
• Unit testing begins at the vertex of the spiral and focuses on each unit of the software program, i.e. the source code.
• Integration testing mainly focuses on the design and construction of the software architecture.
• In validation testing, the requirements established as part of software requirements analysis are validated against the software that has been constructed.
• In system testing, the software and other system elements are tested as a whole.
Software Testing Steps
• Unit testing:
– Focuses on each component individually.
– Makes sure that every unit functions properly.
• Integration testing:
– Focuses on issues related to construction and verification.
– Test-case design techniques that focus on inputs and outputs are very common during integration.
• High-order tests:
– Conducted once the software has been integrated.
• Validation testing:
– Checks whether the software satisfies all performance, functional, and behavioral needs.
Unit Testing
[Figure: the software engineer designs test cases for the module to be tested, exercising its interface, local data structures, boundary conditions, independent paths, and error-handling paths, and examines the results.]
• Unit testing covers the verification effort applied to the smallest unit of software.
• The smallest unit of design is the module.
• Aspects of the program that are tested:
• Local data structures (local variables): checked to make sure that data stored temporarily maintains its integrity during execution.
• Independent paths: all independent paths through the control structure are exercised to make sure that every statement in the module executes at least once.
• Boundary conditions (e.g., array limits): tested to make sure that the module operates properly within the boundaries established for it.
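A minimal sketch of unit tests in Python that exercise boundary conditions and an error-handling path; the clamp function is a hypothetical module under test, not one from this chapter:

import unittest

def clamp(value, low, high):
    """Hypothetical module under test: confine value to [low, high]."""
    if low > high:
        raise ValueError("low must not exceed high")
    return max(low, min(value, high))

class TestClamp(unittest.TestCase):
    def test_boundaries(self):
        # Values exactly at the established boundaries.
        self.assertEqual(clamp(0, 0, 10), 0)
        self.assertEqual(clamp(10, 0, 10), 10)

    def test_error_handling_path(self):
        # The invalid-range path must raise, not return garbage.
        with self.assertRaises(ValueError):
            clamp(5, 10, 0)

if __name__ == "__main__":
    unittest.main()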
Unit Test Environment
[Figure: a driver invokes the module under test, and stubs stand in for the modules it calls; test cases exercise the module's interface, local data structures, boundary conditions, independent paths, and error-handling paths, and the results are collected.]
Unit Test Procedure
The design of unit tests can be performed before coding begins or after source code has been generated.
• Driver
– A simple main program that accepts test-case data, passes such data to the component being tested, and prints the returned results.
• Stubs
– Serve to replace modules that are subordinate to (called by) the component to be tested.
– A stub uses the module's exact interface, may do minimal data manipulation, provides verification of entry, and returns control to the module undergoing testing.
• Drivers and stubs both represent overhead
– Both must be written but don't constitute part of the installed software product. (A minimal sketch of both follows.)
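Here is a minimal sketch of this scaffolding in Python. The process_order component, the tax-rate stub, and the driver are all hypothetical; the point is the shape of the harness, not a real API:

def tax_rate_stub(region):
    """Stub: mimics the subordinate tax module's interface and
    returns a canned value instead of doing a real lookup."""
    return 0.10  # minimal data manipulation

def process_order(amount, region, tax_lookup):
    """Component under test: depends on a subordinate tax module."""
    return round(amount * (1 + tax_lookup(region)), 2)

def driver():
    """Driver: a simple 'main' that feeds test-case data to the
    component and prints the returned results."""
    for amount, region, expected in [(100.0, "EU", 110.0)]:
        result = process_order(amount, region, tax_rate_stub)
        print(amount, region, result, "PASS" if result == expected else "FAIL")

if __name__ == "__main__":
    driver()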
Integration Testing
• Defined as a systematic technique for
constructing the software architecture
– At the same time integration is occurring, conduct
tests to uncover errors associated with interfaces
• Objective is to take unit tested modules and
build a program structure based on the
prescribed design
• Two Approaches
– Non-incremental Integration Testing
– Incremental Integration Testing
Non-incremental Integration Testing
• Commonly called the "big bang" approach
• All components are combined in advance
• The entire program is tested as a whole
• Chaos results
• Many seemingly unrelated errors are encountered
• Correction is difficult because isolating causes is complicated
• Once a set of errors is corrected, more errors occur, and testing appears to enter an endless loop
Incremental Integration Testing
• Three kinds
– Top-down integration
– Bottom-up integration
– Sandwich integration
• The program is constructed and tested in
small increments
• Errors are easier to isolate and correct
• Interfaces are more likely to be tested
completely
• A systematic test approach is applied
Top-down Integration
• Modules are integrated by moving downward through the control hierarchy, beginning with the main module
• Subordinate modules are incorporated in either a depth-first or breadth-first fashion
– DF: all modules on a major control path are integrated
– BF: all modules directly subordinate at each level are integrated
• Advantages
– This approach verifies major control or decision points early in the test process
• Disadvantages
– Stubs need to be created to substitute for modules that have not been built or tested yet; this code is later discarded
– Because stubs are used to replace lower-level modules, no significant data flow can occur until much later in the integration/testing process
Bottom-up Integration
• Integration and testing starts with the most atomic
modules in the control hierarchy
• Advantages
– This approach verifies low-level data processing
early in the testing process
– Need for stubs is eliminated
• Disadvantages
– Driver modules need to be built to test the lower-
level modules; this code is later discarded or
expanded into a full-featured version
– Drivers inherently do not contain the complete
algorithms that will eventually use the services of
the lower-level modules; consequently, testing may
be incomplete or more testing may be needed later
when the upper-level modules are available
Regression Testing
• Each new addition or change to baselined software may cause
problems with functions that previously worked flawlessly
• Regression testing re-executes a small subset of tests that
have already been conducted
– Ensures that changes have not propagated unintended
side effects
– Helps to ensure that changes do not introduce unintended
behavior or additional errors
– May be done manually or through the use of automated
capture/playback tools
• Regression test suite contains three different classes of test
cases
– A representative sample of tests that will exercise all
software functions
– Additional tests that focus on software functions that are
likely to be affected by the change
– Tests that focus on the actual software components that have been changed
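A minimal sketch of a regression suite in Python's unittest, assuming a hypothetical slugify component: one test pins behavior that previously worked flawlessly, another focuses on an area a recent change is likely to affect:

import unittest

def slugify(title):
    """Hypothetical component that previously worked flawlessly."""
    return title.strip().lower().replace(" ", "-")

class TestSlugifyRegression(unittest.TestCase):
    # Representative sample exercising the existing function.
    def test_known_good_behavior(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    # Test focused on behavior likely affected by a recent change
    # (say, whitespace handling was just modified).
    def test_change_prone_whitespace(self):
        self.assertEqual(slugify("  Hello World  "), "hello-world")

if __name__ == "__main__":
    unittest.main()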
Smoke Testing
• Taken from the world of hardware
– Power is applied and a technician checks for sparks, smoke, or
other dramatic signs of fundamental failure
• Designed as a pacing mechanism for time-critical projects
– Allows the software team to assess its project on a frequent
basis
• Includes the following activities
– The software is compiled and linked into a build
– A series of breadth tests is designed to expose errors that will
keep the build from properly performing its function
• The goal is to uncover “show stopper” errors that have the
highest likelihood of throwing the software project behind
schedule
– The build is integrated with other builds and the entire
product is smoke tested daily
• Daily testing gives managers and practitioners a realistic
assessment of the progress of the integration testing
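A minimal sketch of a daily smoke test in Python. The tiny inventory functions below are hypothetical stand-ins for a freshly compiled and linked build; the test exercises one basic end-to-end path, not edge cases:

_store = {}

def add_item(name, quantity):
    """Part of the hypothetical build under test."""
    _store[name] = _store.get(name, 0) + quantity
    return name

def get_quantity(name):
    return _store.get(name, 0)

def smoke_test():
    # Goal: uncover "show stopper" errors in the critical path.
    item = add_item("widget", quantity=3)
    assert get_quantity(item) == 3, "show stopper: basic add/get path broken"
    print("smoke test passed")

if __name__ == "__main__":
    smoke_test()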
Benefits of Smoke Testing
• Integration risk is minimized
– Daily testing uncovers incompatibilities and show-stoppers early in the testing process, thereby reducing schedule impact
• The quality of the end product is improved
– Smoke testing is likely to uncover both functional errors and architectural and component-level design errors
• Error diagnosis and correction are simplified
– Smoke testing will probably uncover errors in the newest components that were integrated
• Progress is easier to assess
– As integration testing progresses, more software has been integrated and more has been demonstrated to work
– Managers get a good indication that progress is being made
Alpha and Beta Testing
• Alpha testing
– Conducted at the developer’s site by end users
– Software is used in a natural setting with developers
watching intently
– Testing is conducted in a controlled environment
• Beta testing
– Conducted at end-user sites
– Developer is generally not present
– It serves as a live application of the software in an
environment that cannot be controlled by the developer
– The end-user records all problems that are encountered
and reports these to the developers at regular intervals
• After beta testing is complete, software engineers
make software modifications and prepare for release
of the software product to the entire customer base
System Testing Different Types
• Recovery testing
– Tests for recovery from system faults
– Forces the software to fail in a variety of ways and verifies that recovery
is properly performed
– Tests reinitialization, checkpointing mechanisms, data recovery, and
restart for correctness
• Security testing
– Verifies that protection mechanisms built into a system will, in fact,
protect it from improper access
• Stress testing
– Executes a system in a manner that demands resources in abnormal
quantity, frequency, or volume
• Performance testing
– Tests the run-time performance of software within the context of an
integrated system
– Often coupled with stress testing and usually requires both hardware
and software instrumentation
– Can uncover situations that lead to degradation and possible system
failure
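A minimal sketch of a crude performance/stress measurement in Python: time a critical operation under abnormally increasing volume and watch for non-linear degradation. The sorting workload is a hypothetical stand-in for a real system operation:

import time

def workload(n):
    """Hypothetical critical operation under test."""
    data = list(range(n, 0, -1))
    return sorted(data)

# Stress-style escalation: demand resources in abnormal volume.
for n in (10_000, 100_000, 1_000_000):
    start = time.perf_counter()
    workload(n)
    elapsed = time.perf_counter() - start
    print(f"n={n:>9,}  elapsed={elapsed:.4f}s")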
Software Testing: White-Box and Black-Box
[Figure: white-box testing works from the internal logic ("our goal is to ensure that all statements and conditions have been executed at least once"), while black-box testing works from the requirements, feeding inputs and events and checking outputs.]
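A minimal sketch contrasting the two views on one hypothetical function: the black-box tests are derived from the requirement alone, while the white-box additions force every statement and condition to execute at least once:

def grade(score):
    """Hypothetical unit: pass/fail a score in [0, 100]."""
    if score < 0 or score > 100:
        raise ValueError("score out of range")
    if score >= 50:
        return "pass"
    return "fail"

# Black-box tests: requirements in, outputs checked, internals ignored.
assert grade(75) == "pass"
assert grade(20) == "fail"

# White-box additions: force the remaining conditions and branches,
# i.e. both halves of the range check and the boundary of score >= 50.
assert grade(50) == "pass"
for bad in (-1, 101):
    try:
        grade(bad)
    except ValueError:
        pass
    else:
        raise AssertionError("range-check branch not taken")
print("all statements and conditions executed at least once")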
Debugging Process
• Debugging occurs as a consequence of successful testing
• It is still very much an art rather than a science
• Good debugging ability may be an innate human trait
• Large variances in debugging ability exist
• The debugging process begins with the execution of a
test case
• Results are assessed and the difference between
expected and actual performance is encountered
• This difference is a symptom of an underlying cause that
lies hidden
• The debugging process attempts to match symptom with
cause, thereby leading to error correction
Why is Debugging so Difficult?
• The symptom and the cause may be
geographically remote
• The symptom may disappear (temporarily) when
another error is corrected
• The symptom may actually be caused by nonerrors (e.g., round-off inaccuracies)
• The symptom may be caused by human error that
is not easily traced
• The symptom may be a result of timing problems rather than processing problems
• It may be difficult to accurately reproduce input conditions, such as asynchronous real-time information
• The symptom may be intermittent, as in embedded systems involving both hardware and software
• The symptom may be due to causes that are distributed across a number of tasks running on different processors
Debugging Strategies
• Objective of debugging is to find and correct the
cause of a software error
• Bugs are found by a combination of systematic
evaluation, intuition, and luck
• Debugging methods and tools are not a substitute for
careful evaluation based on a complete design model
and clear source code
• There are three main debugging strategies
– Brute force
– Backtracking
– Cause elimination
Strategy #1: Brute Force
• Most commonly used and least efficient method
• Used when all else fails
• Involves the use of memory dumps, run-time traces,
and output statements
• Leads many times to wasted effort and time

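A minimal sketch of brute-force debugging in Python: run-time traces and state dumps are scattered through a hypothetical buggy routine until the faulty value shows itself:

import logging

logging.basicConfig(level=logging.DEBUG, format="%(levelname)s %(message)s")

def mean_of_positives(values):
    total, count = 0, 0
    for v in values:
        # Run-time trace: dump intermediate state on every iteration.
        logging.debug("v=%r total=%r count=%r", v, total, count)
        if v > 0:
            total += v
            count += 1
    logging.debug("final total=%r count=%r", total, count)  # state dump
    return total / count  # ZeroDivisionError when no positives; the trace shows why

print(mean_of_positives([3, 5, 7]))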
Strategy #2: Backtracking
• Can be used successfully in small programs
• The method starts at the location where a symptom
has been uncovered
• The source code is then traced backward (manually)
until the location of the cause is found
• In large programs, the number of potential backward
paths may become unmanageably large

Strategy #3: Cause Elimination
• Involves the use of induction or deduction and introduces the
concept of binary partitioning
– Induction (specific to general): Prove that a specific starting
value is true; then prove the general case is true
– Deduction (general to specific): Show that a specific
conclusion follows from a set of general premises
• Data related to the error occurrence are organized to isolate
potential causes
• A cause hypothesis is devised, and the aforementioned data
are used to prove or disprove the hypothesis
• Alternatively, a list of all possible causes is developed, and tests
are conducted to eliminate each cause
• If initial tests indicate that a particular cause hypothesis shows
promise, data are refined in an attempt to isolate the bug

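A minimal sketch of binary partitioning in Python, assuming a single bad record makes a hypothetical process() routine fail: each split eliminates half of the candidate causes until the culprit is isolated:

def process(batch):
    """Hypothetical routine that crashes on a particular bad record."""
    for record in batch:
        if record is None:          # the hidden bug trigger
            raise ValueError("bad record")

def isolate_failure(batch):
    """Binary-partition the input until one failing record remains."""
    while len(batch) > 1:
        mid = len(batch) // 2
        first = batch[:mid]
        try:
            process(first)
        except ValueError:
            batch = first           # cause lies in the first half
        else:
            batch = batch[mid:]     # first half is clean; eliminate it
    return batch[0]

data = [1, 2, 3, None, 5, 6, 7, 8]
print("failing record:", isolate_failure(data))  # -> None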
