
Software Testing

UNIT I
FOUNDATIONS OF SOFTWARE TESTING

Why do we test software? Black-Box Testing and White-Box Testing;
Software Testing Life Cycle; V-Model of Software Testing; Program
Correctness and Verification; Reliability versus Safety; Failures,
Errors, and Faults (Defects); Software Testing Principles; Program
Inspections; Stages of Testing: Unit Testing, Integration Testing,
System Testing.
Observations about Testing

• “Testing is the process of executing a program with the intention of
  finding errors.” – Myers
• “Testing can show the presence of bugs, but never their absence.” – Dijkstra
Good Testing Practices

• A good test case is one that has a high probability of detecting an
  undiscovered defect, not one that shows that the program works
  correctly.
• A necessary part of every test case is a description of the expected
  result.
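To make the second point concrete, here is a minimal sketch in pytest
style; the function under test, parse_price, is hypothetical:

    def parse_price(text: str) -> float:
        """Convert a price string such as "$4.20" to a float."""
        return float(text.lstrip("$"))

    def test_parse_price_plain_dollar_amount():
        expected = 4.20  # the expected result is part of the test case
        assert parse_price("$4.20") == expected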
Good Testing Practices (cont’d)

• Write test cases for valid as well as invalid input conditions (see
  the sketch below).
• Thoroughly inspect the results of each test.
• As the number of detected defects in a piece of software increases,
  the probability that more undetected defects exist also increases.
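A sketch of the valid/invalid-input practice, again in pytest style;
is_valid_age and its accepted range are hypothetical:

    import pytest

    def is_valid_age(age: int) -> bool:
        """Hypothetical validator: integer ages 0-130 are accepted."""
        if not isinstance(age, int):
            raise TypeError("age must be an int")
        return 0 <= age <= 130

    # Valid and boundary inputs, each with an explicit expected result.
    @pytest.mark.parametrize("age, expected",
                             [(0, True), (42, True), (130, True), (131, False)])
    def test_valid_and_boundary_ages(age, expected):
        assert is_valid_age(age) == expected

    # Invalid input condition: a wrong type should be rejected loudly.
    def test_invalid_age_type():
        with pytest.raises(TypeError):
            is_valid_age("forty-two")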
Good Testing Practices (cont’d)

• Assign your best people to testing.
• Ensure that testability is a key objective in your software design.
• Never alter the program to make testing easier.
• Testing, like almost every other activity, must start with objectives.
Levels of Testing

• Unit Testing
• Integration Testing
• Validation Testing
• Regression Testing
• Alpha Testing
• Beta Testing
• Acceptance Testing
Unit Testing

• Algorithms and logic
• Data structures (global and local)
• Interfaces
• Independent paths
• Boundary conditions
• Error handling
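A brief illustration of unit tests at this level; binary_search is a
stand-in example, not something taken from the slides:

    def binary_search(items, target):
        """Return the index of target in a sorted list, or -1 if absent."""
        lo, hi = 0, len(items) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            if items[mid] == target:
                return mid
            if items[mid] < target:
                lo = mid + 1
            else:
                hi = mid - 1
        return -1

    def test_logic_and_independent_paths():
        assert binary_search([1, 3, 5, 7], 5) == 2   # found
        assert binary_search([1, 3, 5, 7], 4) == -1  # not-found path

    def test_boundary_conditions():
        assert binary_search([1, 3, 5, 7], 1) == 0   # first element
        assert binary_search([1, 3, 5, 7], 7) == 3   # last element
        assert binary_search([], 9) == -1            # empty input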
Why Integration Testing Is Necessary

• One module can have an adverse effect on another.
• Subfunctions, when combined, may not produce the desired major
  function.
• Individually acceptable imprecision in calculations may be magnified
  to unacceptable levels (illustrated below).
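A tiny illustration of the imprecision point, assuming IEEE-754
double-precision floats:

    # Each addition of 0.1 carries a rounding error near 1e-17, which is
    # individually acceptable; accumulated over a million additions the
    # error grows to roughly 1.3e-3, which may no longer be.
    total = sum(0.1 for _ in range(1_000_000))
    print(total - 100_000.0)  # ~1.33e-3 on a typical build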
Why Integration Testing Is Necessary (cont’d)

• Interfacing errors not detected in unit testing may appear.
• Timing problems (in real-time systems) are not detectable by unit
  testing.
• Resource contention problems are not detectable by unit testing.
Top-Down Integration

1. The main control module is used as a test driver, and stubs (a stub
   is a small piece of code that replaces another component during
   testing) are substituted for all modules directly subordinate to
   the main module.
2. Depending on the integration approach selected (depth-first or
   breadth-first), subordinate stubs are replaced by real modules one
   at a time.
Top-Down Integration (cont’d)

3. Tests are run as each individual module is integrated.
4. On the successful completion of a set of tests, another stub is
   replaced with a real module.
5. Regression testing is performed to ensure that errors have not been
   introduced as a result of integrating new modules.
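A hedged sketch of steps 1-4 in Python; the names (checkout_total,
get_tax_rate_stub) are hypothetical:

    def get_tax_rate_stub(region: str) -> float:
        """Stub: replaces the real tax-lookup module with a fixed value."""
        return 0.08

    def checkout_total(subtotal: float, region: str, tax_lookup) -> float:
        """Main-module logic under test; the subordinate module is injected."""
        return round(subtotal * (1 + tax_lookup(region)), 2)

    def test_checkout_total_against_stub():
        assert checkout_total(100.0, "US-CA", get_tax_rate_stub) == 108.0

Once the real lookup module replaces the stub, the same test is rerun
as part of the regression pass in step 5.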
Problems with Top-Down Integration

• Many times, calculations are performed in the modules at the bottom
  of the hierarchy.
• Stubs typically do not pass data up to the higher modules.
• Delaying testing until lower-level modules are ready usually results
  in integrating many modules at the same time rather than one at a
  time.
• Developing stubs that can pass data up is almost as much work as
  developing the actual module.
Bottom-Up Integration

• Integration begins with the lowest-level modules, which are combined
  into clusters, or builds, that perform a specific software
  subfunction.
• Drivers (control programs that stand in for the not-yet-integrated
  higher-level modules) are written to coordinate test case input and
  output; see the sketch below.
• The cluster is tested.
• Drivers are removed and clusters are combined, moving upward in the
  program structure.
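A hedged sketch of a driver for a two-module cluster; normalize and
tokenize are hypothetical low-level modules:

    def normalize(text: str) -> str:
        """Low-level module: collapse whitespace and lowercase."""
        return " ".join(text.lower().split())

    def tokenize(text: str) -> list[str]:
        """Low-level module: split normalized text into words."""
        return text.split(" ")

    def driver():
        """Driver: stands in for the absent higher-level caller,
        coordinating test-case input and output for the cluster."""
        cases = [("  Hello   WORLD ", ["hello", "world"])]
        for raw, expected in cases:
            actual = tokenize(normalize(raw))
            assert actual == expected, f"{raw!r}: got {actual}"
        print("cluster tests passed")

    if __name__ == "__main__":
        driver()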
Problems with Bottom-Up Integration

• The whole program does not exist until the last module is integrated.
• Timing and resource contention problems are not found until late in
  the process.
Validation Testing

• Determine whether the software meets all of the requirements defined
  in the SRS (Software Requirements Specification).
• Having written requirements is essential.
• Regression testing is performed to determine whether the software
  still meets all of its requirements in light of changes and
  modifications.
• Regression testing involves selectively repeating existing validation
  tests, not developing new ones; one approach is sketched below.
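One common way to make that selective repetition mechanical (an
assumption, not something the slides prescribe) is to tag existing
validation tests and filter on the tag, e.g. with pytest markers:

    import pytest

    def login(user: str, password: str) -> bool:
        """Hypothetical system entry point under validation."""
        return bool(user) and password == "correct-password"

    # Existing validation test, tagged so the regression subset can be
    # rerun selectively ("regression" would be registered in pytest.ini).
    @pytest.mark.regression
    def test_srs_4_1_1_valid_login_accepted():
        assert login("alice", "correct-password")

Running pytest -m regression then repeats only the tagged subset after
each change.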
Alpha and Beta Testing

• It is best to provide customers with an outline of the things you
  would like them to focus on and specific test scenarios for them to
  execute.
• Give customers who are actively involved a commitment to fix the
  defects that they discover.
Acceptance Testing

• Similar to validation testing, except that customers are present or
  directly involved.
• Usually the tests are developed by the customer.
Test Methods

• White-box (glass-box) testing
• Black-box testing
• Top-down and bottom-up approaches for performing incremental
  integration
• ALAC (act-like-a-customer) testing
Test Types

• Functional tests
• Algorithmic tests
• Positive tests
• Negative tests
• Usability tests
• Boundary tests
• Startup/shutdown tests
• Platform tests
• Load/stress tests
Test Planning

• The Test Plan – defines the scope of the work to be performed.
• The Test Procedure – a container document that holds all of the
  individual tests (test scripts) to be executed.
• The Test Report – documents what occurred when the test scripts were
  run.
Test Plan

• Questions to be answered:
  – How many tests are needed?
  – How long will it take to develop those tests?
  – How long will it take to execute those tests?
• Topics to be addressed:
  – Test estimation
  – Test development and informal validation
  – Validation readiness review and formal validation
  – Test completion criteria
Test Estimation

• The number of test cases required is based on:
  – Testing all functions and features in the SRS
  – Including an appropriate number of ALAC (act-like-a-customer)
    tests, such as:
    · Do it wrong
    · Use a wrong or illegal combination of inputs
    · Don’t do enough
    · Do nothing
    · Do too much
  – Achieving some test coverage goal
  – Achieving a software reliability goal
Considerations in Test Estimation

• Test complexity – It is better to have many small tests than a few
  large ones.
• Different platforms – Does testing need to be modified for different
  platforms, operating systems, etc.?
• Automated or manual tests – Will automated tests be developed?
  Automated tests take more time to create but do not require human
  intervention to run.
Estimating Tests Required

SRS Reference   Estimated Tests Required   Notes
4.1.1           3                          2 positive and 1 negative test
4.1.2           2                          2 automated tests
4.1.3           4                          4 manual tests
4.1.4           5                          1 boundary condition, 2 error
                                           conditions, 2 usability tests

Total           165
Estimated Test Development Time

Estimated number of tests: 165
Average test development time: 3.5 person-hours/test
Estimated test development time: 577.5 person-hours

Estimated Test Execution Time

Estimated number of tests: 165
Average test execution time: 1.5 person-hours/test
Estimated test execution time: 247.5 person-hours
Estimated regression testing (50%): 123.75 person-hours
Total estimated test execution time: 371.25 person-hours
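The arithmetic from the two estimates above, restated as a short sketch:

    num_tests = 165                       # from the estimation table
    dev_hours = num_tests * 3.5           # 577.5 person-hours to develop
    exec_hours = num_tests * 1.5          # 247.5 person-hours per full pass
    regression_hours = 0.50 * exec_hours  # 123.75 person-hours of reruns
    total_exec_hours = exec_hours + regression_hours  # 371.25 person-hours
    print(dev_hours, total_exec_hours)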
Test Procedure

• A collection of test scripts.
• An integral part of each test script is the expected results.
• The Test Procedure document should contain an unexecuted, clean copy
  of every test so that the tests may be more easily reused.
Test Report

• A completed copy of each test script with evidence that it was
  executed (i.e., dated, with the signature of the person who ran the
  test)
• A copy of each SPR (software problem report) showing its resolution
• A list of open or unresolved SPRs
• Identification of the SPRs found in each baseline, along with the
  total number of SPRs in each baseline
• The regression tests executed for each software baseline
Validation Test Plan
(IEEE Standard 1012-1998)

1. Overview
   a. Organization
   b. Tasks and Schedules
   c. Responsibilities
   d. Tools, Techniques, Methods
2. Processes
   a. Management
   b. Acquisition
   c. Supply
   d. Development
   e. Operation
   f. Maintenance
Validation Test Plan
(IEEE Standard 1012-1998, cont’d)

3. Reporting Requirements
4. Administrative Requirements
5. Documentation Requirements
6. Resource Requirements
7. Completion Criteria