Software Testing: An Overview
Common software problems include:
- Confusing or misleading data
- Poor software usability for end users
- Obsolete software
- Inconsistent processing
- Unreliable results or performance
- Inadequate support of business needs
- Incorrect or inadequate interfaces with other systems
- Inadequate performance and security controls
- Incorrect file handling
Objectives of testing
- Executing a program with the intent of finding an error.
- To check if the system meets the requirements and can be executed successfully in the intended environment.
- To check if the system is fit for purpose.
- To check if the system does what it is expected to do.
Objectives of testing
- A good test case is one that has a high probability of finding an as-yet undiscovered error.
- A successful test is one that uncovers an as-yet undiscovered error.
- A good test is not redundant.
- A good test should be best of breed.
- A good test should be neither too simple nor too complex.
- Find bugs as early as possible and make sure they get fixed.
- Understand the application well.
- Study the functionality in detail to find where the bugs are likely to occur.
- Study the code to ensure that each and every line of code is tested.
- Create test cases in such a way that testing uncovers the hidden bugs and also ensures that the software is usable and reliable.
Validation typically involves actual testing and takes place after verification is completed. The validation and verification processes continue in a cycle until the software becomes defect free.
TESTABILITY
- Operability
- Observability
- Controllability
- Decomposability
- Stability
- Understandability
The PDCA cycle: Plan, Do, Check, Action.
PLAN (P): Devise a plan. Define your objective and determine the strategy and supporting methods required to achieve that objective.
DO (D): Execute the plan. Create the conditions and perform the necessary training to execute the plan.
CHECK (C): Check the results. Determine whether work is progressing according to the plan and whether the expected results are being obtained.
ACTION (A): Take the necessary and appropriate action if the check reveals that the work is not being performed according to plan or the results are not as anticipated.
QUALITY PRINCIPLES
- Quality is the most important factor affecting an organization's long-term performance.
- Quality is the way to achieve improved productivity and competitiveness in any organization.
- Quality saves; it does not cost.
- Quality is the solution to the problem, not a problem.
Cost of Quality
Prevention cost: Amount spent before the product is actually built. Costs incurred on establishing methods and procedures, training workers, acquiring tools and planning for quality.
Appraisal cost: Amount spent after the product is built but before it is shipped to the user. Costs of inspection, testing, and reviews.
Failure cost: Amount spent to repair failures. Costs associated with defective products that have been delivered to the user or moved into production, including the cost of repairing products to make them conform to requirements.
Quality Assurance
A planned and systematic set of activities necessary to provide adequate confidence that requirements are properly established and products or services conform to specified requirements.
Quality Control
The process by which product quality is compared with applicable standards; and the action taken when non-conformance is detected.
QA: An activity that establishes and evaluates the processes to produce the products.
QC: An activity which verifies if the product meets pre-defined standards.
Responsibilities of QA and QC
- QA is the responsibility of the entire team. It prevents the introduction of issues or defects.
- QA evaluates whether or not quality control is working, for the primary purpose of determining whether there is a weakness in the process.
- QC is the responsibility of the tester. It detects, reports and corrects defects.
- QC evaluates if the application is working, for the primary purpose of determining whether there is a flaw / defect in the functionalities.
QA improves the process, which is applied to all products that will ever be produced by that process.
QA personnel should not perform quality control unless they are doing it to validate that quality control is working.
SEI CMM
The Software Engineering Institute (SEI) developed the Capability Maturity Model (CMM). CMM describes the prime elements: planning, engineering, and managing software development and maintenance. CMM can be used for:
- Software process improvement
- Software process assessment
- Software capability evaluations
Defined: Level 3
Managed (predictable process): Level 4
Optimizing: Level 5
Design
The SRS (the output of the requirements phase) is the input to the design phase.
Two types of design:
- High Level Design (HLD)
- Low Level Design (LLD)
Top-down approach
Bottom-Up Approach
Coding: Developers use the LLD document and write the code in the specified programming language.
Testing: The testing process involves development of a test plan, executing the plan and documenting the test results.
Implementation: Installation of the product in its operational environment.
Maintenance
After the software is released and the client starts using it, the maintenance phase begins. Three things happen: bug fixing, upgrade, and enhancement.
- Bug fixing: bugs arise due to untested scenarios.
- Upgrade: upgrading the application to newer versions of the software.
- Enhancement
PROTOTYPE MODEL
INCREMENTAL MODEL
EVOLUTIONARY DEVELOPMENT MODEL
Project Management
Project Staffing
Project Planning
Plan: Description
Quality plan
Validation plan
Configuration management plan: Describes the configuration management procedures and structures to be used.
Maintenance plan: Predicts the maintenance requirements of the system, maintenance costs and the effort required.
Staff development plan: Describes how the skills and experience of the project team members will be developed.
Project Scheduling
Scheduling problems
RISK MANAGEMENT
- Risk identification
- Risk analysis
- Risk planning
- Risk monitoring
Risk (Risk type): Description
Staff turnover (Project): Experienced staff will leave the project before it is finished.
Management change (Project): There will be a change of organizational management with different priorities.
Hardware unavailability (Project): Hardware which is essential for the project will not be delivered on schedule.
Requirements change (Project & Product): There will be a larger number of changes to the requirements than anticipated.
Specification delays (Project & Product): Specifications of essential interfaces are not available on schedule.
Size under-estimate (Project & Product): The size of the system has been under-estimated.
CASE tool under-performance (Product): CASE tools which support the project do not perform as anticipated.
Technology change (Business): The underlying technology on which the system is built is superseded by new technology.
Product competition (Business): A competitive product is marketed before the system is completed.
Configuration Management
(Example system family: an initial system with PC, VMS, DEC, Sun and Unix versions.)
CM Planning
Identifies the documents required for future system maintenance.
Change Management
Keeping and managing the changes, and ensuring that they are implemented in the most cost-effective way.
Records the change required, who suggested the change, the reason why the change was suggested, and the urgency of the change.
Records the change evaluation: impact analysis, change cost, and recommendations (system maintenance staff).
Defines an identification scheme for system versions and plans when a new system version is to be produced. Ensures that version management procedures and tools are properly applied, and plans and distributes new system releases.
Versions/Variants/Releases
Variant: An instance of a system which is functionally identical to, but non-functionally distinct from, other instances of the system.
Test life cycle phases: Requirements study, Test Execution, Test Closure, Test Process Analysis.
Requirements study: The testing cycle starts with the study of the client's requirements. Understanding the requirements is essential for testing the product.
Testing Levels
Unit testing
The most micro scale of testing. Tests done on particular functions or code modules. Requires knowledge of the internal program design and code. Done by Programmers (not by testers).
Unit testing
Objectives: To test the function of a program or unit of code such as a program or module; to test internal logic; to verify internal design; to test path and conditions coverage; to test exception conditions and error handling.
When: After modules are coded.
Input: Internal Application Design, Master Test Plan, Unit Test Plan.
Output: Unit Test Report.
Who: Developer.
Methods: White box testing techniques, test coverage techniques.
Tools: Debug, re-structure, code analyzers, path/statement coverage tools.
Education: Testing methodology, effective use of tools.
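As an illustrative sketch only (not part of the original material), a unit test for a hypothetical apply_discount function might look like this with Python's built-in unittest framework; the function name and discount rules are assumptions for illustration.

```python
import unittest


def apply_discount(price, percent):
    """Hypothetical unit under test: return price reduced by percent."""
    if price < 0 or not (0 <= percent <= 100):
        raise ValueError("invalid price or percent")
    return round(price * (1 - percent / 100.0), 2)


class ApplyDiscountTest(unittest.TestCase):
    def test_typical_value(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_zero_discount_returns_original_price(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_raises_error(self):
        # Exception conditions and error handling are also unit-test objectives.
        with self.assertRaises(ValueError):
            apply_discount(50.0, 150)


if __name__ == "__main__":
    unittest.main()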
Integration Testing
Objectives: To technically verify proper interfacing between modules, and within sub-systems.
When: After modules are unit tested.
Input: Internal and External Application Design, Master Test Plan, Integration Test Plan.
Output: Integration Test Report.
Who: Developers.
Methods: White and black box techniques, problem / configuration management.
Tools: Debug, re-structure, code analyzers.
Education: Testing methodology, effective use of tools.
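A minimal sketch of what an integration test can look like, assuming two hypothetical unit-tested modules (parse_order and price_order); the point is to exercise the interface where the output of one module feeds the other.

```python
import unittest


def parse_order(raw):
    """Module A: turn a raw 'item:qty' string into a structured order."""
    item, qty = raw.split(":")
    return {"item": item.strip(), "qty": int(qty)}


def price_order(order, price_list):
    """Module B: compute the total for an order using a price list."""
    return price_list[order["item"]] * order["qty"]


class OrderPipelineIntegrationTest(unittest.TestCase):
    def test_parse_then_price(self):
        # Exercise the interface: the output of parse_order feeds price_order.
        order = parse_order("widget: 3")
        total = price_order(order, {"widget": 2.50})
        self.assertEqual(total, 7.50)


if __name__ == "__main__":
    unittest.main()
```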
System Testing
Objectives: To verify that the system components perform control functions; to perform inter-system tests; to demonstrate that the system performs both functionally and operationally as specified; to perform appropriate types of tests relating to transaction flow, installation, reliability, regression etc.
When: After integration testing.
Input: Detailed Requirements and External Application Design, Master Test Plan, System Test Plan.
Output: System Test Report.
Who: Development team and users.
Methods: Problem / configuration management.
Tools: Recommended set of tools.
Education: Testing methodology, effective use of tools.
Systems Integration Testing
Objectives: To test the co-existence of products and applications that are required to perform together in the production-like operational environment (hardware, software, network); to ensure that the system functions together with all the components of its environment as a total system; to ensure that the system releases can be deployed in the current environment.
When: After system testing; often performed outside of the project life-cycle.
Input: Test Strategy, Master Test Plan, Systems Integration Test Plan.
Output: Systems Integration Test Report.
Methods: White and black box techniques, problem / configuration management.
Tools: Recommended set of tools.
User Acceptance Testing
Objectives: To verify that the system meets the user requirements.
When: After system testing.
Input: Business Needs and Detailed Requirements, Master Test Plan, User Acceptance Test Plan.
Output: User Acceptance Test Report.
Methods: Problem / configuration management.
Tools: Compare, keystroke capture and playback, regression testing tools.
Education: Testing methodology, effective use of tools, product knowledge, business release strategy.
Testing methodologies
- Black box testing
- White box testing
- Incremental testing
- Thread testing
Black box testing: No knowledge of internal design or code required. Tests are based on requirements and functionality.
White box testing: Knowledge of the internal program design and code required. Tests are based on coverage of code statements, branches, paths and conditions.
Black box testing attempts to find errors such as: missing functions, interface errors, errors in data structures or external database access, performance errors, and initialization and termination errors.
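A small black box sketch, assuming a made-up requirement that ages 18 to 60 inclusive are accepted: the test values are chosen from the requirement (boundaries and an invalid class), not from the implementation, which is shown here only so the example runs.

```python
# Assumed requirement (for illustration only): ages 18-60 inclusive are accepted.
def validate_age(age):
    return 18 <= age <= 60


# Boundary values and an invalid case, derived from the requirement alone.
cases = [(17, False), (18, True), (60, True), (61, False)]
for age, expected in cases:
    assert validate_age(age) == expected, f"age {age}: expected {expected}"
print("all black box boundary cases passed")
```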
Functional testing: Black box type testing geared to the functional requirements of an application. Done by testers.
System testing: Black box type testing that is based on overall requirements specifications; covers all combined parts of the system.
End-to-end testing: Similar to system testing; involves testing of a complete application environment in a situation that mimics real-world use.
Sanity testing
Initial effort to determine if a new software version is performing well enough to accept it for a major testing effort.
Regression testing
Acceptance testing
Load testing
Testing an application under heavy loads, e.g. testing a web site under a range of loads to determine at what point the system's response time degrades or fails.
Stress testing
Testing under unusually heavy loads, heavy repetition of certain actions or inputs, input of large numerical values, large complex queries to a database, etc. The term is often used interchangeably with load and performance testing.
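A minimal load-test sketch, assuming a simulated target operation so the example is self-contained; in practice the worker would issue real requests against the system under test, or a dedicated tool such as LoadRunner would be used.

```python
# Fire many concurrent "requests" at an operation and record response times.
import time
import random
from concurrent.futures import ThreadPoolExecutor


def simulated_request(_):
    start = time.perf_counter()
    time.sleep(random.uniform(0.01, 0.05))  # stand-in for server processing time
    return time.perf_counter() - start


with ThreadPoolExecutor(max_workers=50) as pool:   # 50 concurrent virtual users
    timings = list(pool.map(simulated_request, range(500)))

timings.sort()
print(f"requests: {len(timings)}")
print(f"average response: {sum(timings) / len(timings):.3f}s")
print(f"95th percentile:  {timings[int(0.95 * len(timings))]:.3f}s")
```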
Install/uninstall testing: Testing of full, partial or upgrade install/uninstall processes.
Recovery testing: Testing how well a system recovers from crashes, hardware failures or other problems.
Compatibility testing: Testing how well software performs in a particular hardware/software/OS/network environment.
Exploratory / ad-hoc testing: Informal software testing that is not based on formal test plans or test cases; testers learn the software in its totality as they test it.
Comparison testing
Alpha testing: Testing done when development is nearing completion; minor design changes may still be made as a result of such testing.
Beta testing: Testing done when development and testing are essentially complete and final bugs and problems need to be found before release.
Mutation testing: Determines whether a set of test data or test cases is useful by deliberately introducing various bugs, then re-testing with the original test data/cases to determine whether the bugs are detected.
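A toy sketch of the idea, using an assumed function and test data: a bug (the "mutant") is introduced deliberately, and the existing test data is considered useful if it detects ("kills") the mutant.

```python
def is_even(n):           # original function
    return n % 2 == 0


def is_even_mutant(n):    # mutant: comparison operator changed from == to !=
    return n % 2 != 0


test_data = [(2, True), (3, False), (0, True)]


def run_tests(func):
    return all(func(n) == expected for n, expected in test_data)


assert run_tests(is_even) is True          # original passes
assert run_tests(is_even_mutant) is False  # mutant is killed, so the test data is useful
print("test data detected the injected bug")
```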
White box testing aims to ensure that:
- All independent paths within a module have been exercised at least once
- All logical decisions are exercised on their true and false sides
- All loops are executed at their boundaries and within their operational bounds
- Internal data structures are exercised to ensure their validity
Loop Testing
This white box technique focuses on the validity of loop constructs. Four different classes of loops can be defined: simple loops, nested loops, concatenated loops, and unstructured loops.
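A sketch of loop testing for a simple loop, using an assumed summing function: the loop is exercised with zero passes, one pass, a typical number of passes, the maximum, and one beyond the maximum.

```python
def sum_first(values, n):
    total = 0
    for i in range(min(n, len(values))):   # simple loop under test
        total += values[i]
    return total


data = [1, 2, 3, 4, 5]
assert sum_first(data, 0) == 0        # skip the loop entirely
assert sum_first(data, 1) == 1        # exactly one pass
assert sum_first(data, 3) == 6        # a typical number of passes
assert sum_first(data, 5) == 15       # the maximum number of passes
assert sum_first(data, 6) == 15       # one more than the maximum (upper bound)
print("loop boundary cases passed")
```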
Incremental Testing
A disciplined method of testing the interfaces between unit-tested programs as well as between system components. Involves adding unit-tested program modules or components one by one, and testing each resulting combination.
Top-down testing starts from the top of the module hierarchy and works down to the bottom. Modules are added in descending hierarchical order.
Bottom-up testing starts from the bottom of the hierarchy and works up to the top. Modules are added in ascending hierarchical order.
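An illustrative sketch of top-down incremental testing, with assumed module names: the high-level module is tested first against a stub that stands in for a lower-level module not yet integrated.

```python
def tax_service_stub(amount):
    """Stub for the real tax module: returns a fixed, predictable value."""
    return 10.0


def checkout_total(cart, tax_func):
    """High-level module under test; depends on a lower-level tax module."""
    subtotal = sum(cart)
    return subtotal + tax_func(subtotal)


# Test the top-level module against the stub before the real tax module is added.
assert checkout_total([20.0, 30.0], tax_service_stub) == 60.0
print("top-level module verified against stub")
```

In bottom-up testing the roles reverse: low-level modules are tested first, using drivers in place of the higher-level modules that call them.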
Testing levels vs. techniques:
Unit Testing: White Box
Integration Testing: White Box and Black Box
System Testing: Black Box
Acceptance Testing: Black Box
Performance Test
Evaluates the compliance of a system or component with specified performance requirements. Often performed using an automated test tool to simulate a large number of users.
Recovery Test
Confirms that the system recovers from expected or unexpected events without loss of data or functionality.
Conversion Test
Testing of code that is used to convert data from existing systems for use in the newly replaced systems
Usability Test
Tests how easily users can learn and use the product.
Configuration Test
Examines an application's requirements for preexisting software, initial states and configuration in order to maintain proper functionality.
TEST PLAN
Objectives
- To identify and document the test items, the features to be tested, the testing tasks, the assignment of tasks and estimates, and the risks requiring contingency planning.
- To validate the acceptability of a software product.
- To help people outside the test group understand the why and how of product validation.
A test plan should be thorough enough (covering the overall tests to be conducted), yet useful and understandable by people inside and outside the test group.
Scope: The areas to be tested by the QA team. Also specifies the areas which are out of scope (screens, database, mainframe processes etc.).
Test Approach: Details on how the testing is to be performed, and any specific strategy to be followed for testing (including configuration management).
Entry Criteria
Various steps to be performed before the start of a test, i.e. pre-requisites, e.g.:
- Timely environment set up
- Starting the web server / app server
- Successful implementation of the latest build
Resources
List of the people involved in the project and their designation etc.
Tasks/Responsibilities Tasks to be performed and responsibilities assigned to the various team members.
Exit Criteria: Contains tasks like bringing down the system / server, restoring the system to the pre-test environment, database refresh etc.
Schedule / Milestones: Deals with the final delivery date and the various milestone dates.
Hardware / Software Requirements: The number of PCs / servers required to install the application or perform the testing, and the specific software needed to get the application running or to connect to the database etc.
Risks and Mitigation Plans: Lists the possible risks during testing and the mitigation plans to implement in case a risk actually turns into reality.
Tools to be used: Lists the testing tools or utilities, e.g. WinRunner, LoadRunner, Test Director, Rational Robot, QTP.
Deliverables: The various deliverables due to the client at various points of time, i.e. daily / weekly / start of the project / end of the project etc. These include test plans, test procedures, test metrics, status reports, test scripts etc.
References
- Procedures
- Templates (client specific or otherwise)
- Standards / guidelines, e.g. Qview
- Project related documents (RSD, ADD, FSD etc.)
Annexure: Links to documents which have been / will be used in the course of testing, e.g. templates used for reports, test cases etc. Referenced documents can also be attached here.
Sign-off: Mutual agreement between the client and the QA team, with both leads/managers signing their agreement on the test plan.
A good test plan is developed and reviewed early.
TEST CASES
A test case is defined as:
- A set of test inputs, execution conditions and expected results, developed for a particular objective.
- Documentation specifying inputs, predicted results and a set of execution conditions for a test item.
- Specifies the inputs that will be tried and the procedures that will be followed when the software is tested.
- A sequence of one or more subtests executed as a sequence, as the outcome and/or final state of one subtest is the input and/or initial state of the next.
- Specifies the pretest state of the AUT (application under test) and its environment, and the test inputs or conditions.
- The expected result specifies what the AUT should produce from the test inputs.
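A sketch of a test case capturing these elements as structured data; the identifiers, field names and values are assumptions for illustration, not a prescribed format.

```python
# One test case: pretest state, inputs/conditions, and the expected result.
login_test_case = {
    "id": "TC-042",
    "objective": "Verify login with a valid user name and password",
    "pretest_state": "Application at login screen; user account 'demo' exists",
    "inputs": {"username": "demo", "password": "correct-password"},
    "expected_result": "User is taken to the home page; no error message shown",
}

for field, value in login_test_case.items():
    print(f"{field:16}: {value}")
```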
Test Cases
A good test case is unambiguous and can be inspected, is repeatable and predictable in execution, does not mislead, and is feasible.
A defect is a variance from a desired product attribute. Two categories of defects are Variance from product specifications Variance from Customer/User expectations
A requirement stated in the specification by the user is not in the built product, or something not specified has been included.
Defect categories: Wrong, Missing, Extra.
Defect Log
1. Defect ID number
2. Descriptive defect name and type
3. Source of defect (test case or other source)
4. Defect severity
5. Defect priority
6. Defect status (e.g. new, open, fixed, closed, reopen, reject)
7. Date and time tracking for either the most recent status change, or for each change in status
8. Detailed description, including the steps necessary to reproduce the defect
9. Component or program where the defect was found
10. Screen prints, logs, etc. that will aid the developer in the resolution process
11. Stage of origination
12. Person assigned to research and/or correct the defect
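A sketch of a defect log entry covering most of the fields above; the class name, field names and example values are assumptions for illustration, not a prescribed schema.

```python
from dataclasses import dataclass, field
from datetime import datetime


@dataclass
class DefectRecord:
    defect_id: str
    name: str
    source: str                 # test case or other source
    severity: str               # e.g. Critical, Major, Medium, Low
    priority: str
    status: str = "New"         # New, Open, Fixed, Closed, Reopen, Reject
    component: str = ""
    description: str = ""       # steps necessary to reproduce the defect
    assigned_to: str = ""
    last_status_change: datetime = field(default_factory=datetime.now)


bug = DefectRecord(
    defect_id="DEF-101",
    name="Save button ignores unsaved changes",
    source="TC-042",
    severity="Major",
    priority="High",
    component="Order entry screen",
    description="1. Edit an order  2. Click Save  3. Changes are not persisted",
    assigned_to="developer on duty",
)
print(bug.defect_id, bug.status, bug.severity)
```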
Severity Vs Priority
Severity: A factor that shows how bad the defect is and the impact it has on the product.
Priority: Based upon input from users regarding which defects are most important to them and should be fixed first.
Severity Levels
Severity level: Critical
- An installation process which does not load a component
- A missing menu option
- Security permission required to access a function under test
- Functionality does not permit further testing
- Runtime errors that crash the system
- Functionality missed out / incorrect implementation (major deviation from requirements)
- Performance issues (if specified by the client)
- Browser incompatibility and operating system incompatibility issues, depending on the impact of the error
- Dead links

Severity level: Major / High
- The wrong field being updated
- An update operation that fails to complete
- Performance issues (if not specified by the client)
- Mandatory validations for mandatory fields
- Incorrect implementation (minor deviation from requirements)
- Images or graphics missing which hinder functionality
- Front end / home page alignment issues

Severity level: Average / Medium
- Incorrect / missing hot key operation
- Misspelled or ungrammatical text
- Inappropriate or incorrect formatting (such as text font, size, alignment, color, etc.)
- Screen layout issues
- Spelling mistakes / grammatical mistakes
- Documentation errors

Severity level: Minor / Low
- Page titles missing
- Alt text for images missing
- Background color for pages other than the home page
- Default value missing for required fields
- Cursor set focus and tab flow on the page
- Images or graphics missing which do not hinder functionality
Test Reports
8 INTERIM REPORTS
1. Functional Testing Status
2. Functions Working Timeline
3. Expected vs. Actual Defects Detected Timeline
4. Defects Detected vs. Corrected Gap Timeline
5. Average Age of Detected Defects by Type
6. Defect Distribution
7. Relative Defect Distribution
8. Testing Action
Functional Testing Status: Shows the percentage of the functions that are fully tested, tested with open defects, or not tested.
Functions Working Timeline: Shows the actual plan to have all functions working versus the current status of the functions working. A line graph is an ideal format.
Expected vs. Actual Defects Detected Timeline: Shows the gap between the number of defects being generated and the number of defects expected at the planning stage.
Defects Detected vs. Corrected Gap Timeline: Shows the number of defects uncovered versus the number of defects corrected and accepted by the testing group.
Average Age of Detected Defects by Type: Shows the average number of days defects have remained open, by severity type or level. The planning stage provides the acceptable open days by defect type.
Defect Distribution: Shows defect distribution by function or module and the number of tests completed.
Relative Defect Distribution: Compares the level of defects with the previous reports generated. Normalizing over the number of functions or lines of code gives a more accurate picture of the defect level.
Testing Action
Report shows possible shortfalls in testing, the number of severity-1 defects, the priority of defects, recurring defects, tests behind schedule, and other information that presents an accurate testing picture.
METRICS
Two types:
- Process metrics
- Product metrics
Process Metrics
Product Metrics
Test Metrics
- User participation = user participation test time vs. total test time
- Paths tested = number of paths tested vs. total number of paths
- Acceptance criteria tested = acceptance criteria verified vs. total acceptance criteria
- Cost to locate a defect = test cost / number of defects located in testing
- Detected production defects = number of defects detected in production / application system size
- Test automation = cost of manual test effort / total test cost
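A sketch of these metrics computed from made-up project numbers; all values below are assumptions chosen only to show the arithmetic.

```python
# Example (invented) project data
paths_tested, total_paths = 180, 200
criteria_verified, total_criteria = 45, 50
test_cost, defects_found_in_test = 12000.0, 240
production_defects, system_size_kloc = 12, 80.0
manual_test_cost, total_test_cost = 4000.0, 12000.0

print(f"Paths tested:                {paths_tested / total_paths:.0%}")
print(f"Acceptance criteria tested:  {criteria_verified / total_criteria:.0%}")
print(f"Cost to locate a defect:     {test_cost / defects_found_in_test:.2f} per defect")
print(f"Detected production defects: {production_defects / system_size_kloc:.2f} per KLOC")
print(f"Test automation (manual / total cost): {manual_test_cost / total_test_cost:.0%}")
```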
At the time of crisis, projects usually stop using all planned procedures and revert to coding and testing.
A defined software engineering and management process is used for developing and maintaining software. These processes are put together to make a coherent whole.
The organizational measurement plan involves determining the productivity and quality of all important software process activities across all projects.
Process improvement: tools are used to identify weaknesses existing in the processes and to make timely corrections.
TESTING STANDARDS
External standards: Familiarity with and adoption of industry test standards from external organizations.
Internal standards: Development and enforcement of the test standards that testers must meet.
IEEE STANDARDS
The Institute of Electrical and Electronics Engineers (IEEE) has designed an entire set of software standards to be followed by testers, for example:
- IEEE Standard for Software Quality Assurance Plans
- IEEE Standard for Software Configuration Management Plans
Other standards..
- ISO: International Organization for Standardization
- Six Sigma: Zero defect orientation
- SPICE: Software Process Improvement and Capability Determination
- NIST: National Institute of Standards and Technology