White Paper Contest on Testing Concepts, by Lokesh Kumar Pachipala and Nithya Natarajan
Software Testing
What is Software Testing
It is the process of evaluating a system by manual or automated means to verify that it satisfies specified requirements, or to identify differences between expected and actual results.
Why Software Testing
Testing is important because, if not done properly, it impacts operational performance and reliability. Effective software testing helps to deliver quality software products that satisfy users' requirements, needs, and expectations. Done poorly, it leads to high maintenance costs and user dissatisfaction.
Principles of Software Testing
Testing is the process of executing a program with the intent of finding errors, and it should be planned long before testing begins. Test cases must be written for invalid and unexpected, as well as for valid and expected, input conditions. A good test case is one that has a high probability of detecting an error.
Quality Principles
What is Quality
Meeting the customer's requirements the first time and every time. Quality is much more than the absence of defects; it is what allows us to meet customers' expectations.
Quality: The Customer's View
Doing the right things.
Doing them the right way.
Doing it right the first time.
Doing it on time.
Why Quality
Quality is the most important factor affecting an organisation's long-term performance. Quality is the way to achieve improved productivity and competitiveness in any organisation. Quality saves; it does not cost.
Quality Assurance
It is a planned and systematic set of activities necessary to provide adequate confidence that products and services will conform to specified requirements and meet user needs.
Quality Control
It is the process by which product quality is compared with applicable standards, and action is taken when nonconformance is detected.
Software Process
It is the process that deals with the technical and management issues of software development; a process specifies a method of developing software.
Software Project
A software project is a project in which a software process is used.
Software Product
It is the outcome of a software project.
[Figure: the Plan, Do/Execute, Check, Action (PDCA) cycle]
SDLC
Requirement Analysis
The main objective of requirements analysis is to produce a document that properly specifies all requirements of the customer; this document is the primary output of the phase. Many of the defects found in system and acceptance testing originate in requirements. Removing an error injected during requirements can cost as much as 100 times more during acceptance than if it is removed during the requirements phase itself.
Development Process
The development process is one in which user requirements are elicited and software satisfying these requirements is designed, built, tested, and delivered to the customer. It is used when a new application is being developed or a major enhancement is planned for an existing application. Several process models for software development exist; the most common ones include the waterfall model, which organises the phases in a linear sequence.
Design
Design is the phase of the life cycle in which a logical view of the computer implementation of the solution to the customer's requirements is developed. High-level design contains two major components: the functional architecture of the application and the database design. Preparation of the Test Plan is done in this phase.
In detailed design, the view of the application developed during high-level design is broken down into modules and programs. Logic design is done for every program and then documented as program specifications. Unit test cases are prepared based on these documents.
Design
Coding
During this phase, the detailed design is used to produce the required programs in a programming language. This stage produces the source code, executables, and databases, following the appropriate coding standards. Unit testing is started for the programs that are ready.
Testing
The objectives of testing are to determine whether the system meets the requirement specifications and whether it meets business and user needs.
Defect Categories
Wrong
Extra
Missing
Testing Policy
Quality Policy
It is, again, a management statement of providing customer satisfaction the first time and every time.
Testing Levels
Unit Testing
Integration Testing
System Testing
UAT
Unit Testing: testing in which the individual units of the software are tested in isolation from other parts of the program.
Integration Testing: testing in which the software units of an application are combined and tested for the communication interfaces between them.
Integration Testing
Big Bang
Top Down
Bottom Up
Top Down
In this approach, all the modules are added or combined from the higher level of the hierarchy to the lower level; i.e., the higher module is tested in isolation first, then the next set of lower-level modules is tested with the previously tested higher modules.
Stub: a special code segment that, when invoked by a code segment under test, simulates the behaviour of a designed and specified module not yet constructed.
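As an illustration only, here is a minimal Python sketch of a stub in top-down integration; the payroll and tax modules, their names, and the 10% rate are all hypothetical:

    # Hypothetical example: a high-level payroll module calls a
    # tax-calculation module that has not been built yet, so a stub
    # simulates its specified behaviour (a flat 10% rate, say).

    def tax_stub(gross_pay):
        # Stub for the unbuilt tax module: returns a canned result.
        return round(gross_pay * 0.10, 2)

    def compute_net_pay(gross_pay, tax_fn=tax_stub):
        # High-level module under test; the stub is injected in place
        # of the real tax module until it is constructed.
        return round(gross_pay - tax_fn(gross_pay), 2)

    assert compute_net_pay(1000.0) == 900.0  # top-down test using the stub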
Bottom Up
In bottom-up integration testing, all the modules are added or combined from the lower level of the hierarchy to the higher level; i.e., the lower module is tested in isolation first, then the next set of higher-level modules is tested with the previously tested lower modules.
Big Bang
A type of integration testing in which the software components of an application are combined all at once into an overall system. According to this approach, every module is first unit tested in isolation from every other module; after that, all the modules are combined at once and tested.
System Testing
Testing conducted on a complete, integrated system to evaluate its compliance with the specified requirements.
UAT
Testing conducted by the client to evaluate the system's compliance with the business requirements.
Testing Techniques
Equivalence Partitioning
This technique partitions the data into equivalent sets; it optimises the testing required and helps to avoid redundancy. Example: where a deposit rate is input, it may have a valid range of 0% to 15%. There is one positive test, represented by the valid equivalence set 0 <= percentage <= 15, and there are two negative tests, represented by the two invalid equivalence sets percentage < 0 and percentage > 15 (see the test cases below and the sketch that follows them).
Condition No. | Test Case ID | Test Case Description | Data | Expected Result
X2-1 | X2-1-1 | Insert a negative value into the percentage field | -1 | Field should not accept the negative value
X2-1 | X2-1-2 | Insert a valid value into the percentage field | 12 | Field should accept the value
X2-1 | X2-1-3 | Insert a value greater than the permitted range into the percentage field | 16 | Field should not accept the value
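A minimal sketch of these three equivalence-partition cases as executable checks; the validator function is an assumption, since only the 0% to 15% range is specified above:

    def accepts_deposit_rate(percentage):
        # Assumed field validator: accepts only 0 <= percentage <= 15.
        return 0 <= percentage <= 15

    # One test per equivalence set, as in the table above.
    assert accepts_deposit_rate(-1) is False   # invalid set: percentage < 0
    assert accepts_deposit_rate(12) is True    # valid set: 0 <= percentage <= 15
    assert accepts_deposit_rate(16) is False   # invalid set: percentage > 15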
Boundary Analysis
This technique ensures that the minimum, borderline, and maximum data values for a particular variable or equivalence class are taken into account (see the test cases below and the sketch that follows them).
Condition No. | Test Case ID | Data | Expected Result
X2-1 | X2-1-1 | -0.1 | Field should not accept values < 0
X2-1 | X2-1-2 | 0 | Field should accept the value
X2-1 | X2-1-3 | 16 | Field should not accept the value
X2-1 | X2-1-4 | 15 | Field should accept the value
X2-1 | X2-1-5 | 15.1 | Field should not accept the value
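Using the same assumed validator as in the equivalence-partitioning sketch, the boundary cases can be exercised as follows:

    # Boundary values around the valid range 0..15 (validator as above).
    for value, expected in [(-0.1, False),  # just below the minimum
                            (0,    True),   # minimum
                            (15,   True),   # maximum
                            (15.1, False),  # just above the maximum
                            (16,   False)]: # clearly outside the range
        assert accepts_deposit_rate(value) is expected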
Error Guessing
This technique is based on the theory that test cases can be developed using the intuition and experience of the test engineer.
Example: test cases built from combinations of age, sex (Male/Female), and married (True/False):
(1) 23, Male, True
(2) 23, Male, False
(3) 23, Female, True
(4) 23, Female, False
(5) 30, Male, True
(6) 70, Male, False
(7) 50, Female, True
(8) 30, Female, False
Combinations that are impossible are excluded from the test set.
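A sketch of how such combinations might be enumerated mechanically before the tester prunes them; the candidate ages and the rule for impossible combinations are illustrative assumptions:

    from itertools import product

    ages = [23, 30, 50, 70]    # sample ages from the example above
    sexes = ["Male", "Female"]
    married = [True, False]

    def is_possible(age, sex, is_married):
        # Hypothetical rule marking a combination impossible,
        # e.g. minors cannot be married.
        return not (age < 18 and is_married)

    # Full cross-product of candidate values, minus impossible cases;
    # the tester then selects a representative subset.
    cases = [(a, s, m) for a, s, m in product(ages, sexes, married)
             if is_possible(a, s, m)]
    for i, case in enumerate(cases, 1):
        print(i, case)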
[Figure: the build-and-test sequence Code, Code Review, Unit Test, Integration Test]
Unit Test
Objective: To test the internal logic and design of a program module.
Test Types: 1. Conversion 2. Error-handling 3. Function 4. Regression
Entry Criteria: 1. Program Spec reviewed and available. 2. File/DB Design reviewed and available. 3. Code ready for Unit Test. 4. Unit test cases reviewed and ready. 5. Test data defined and ready.
Expected Result: 1. Unit test cases 100% executed. 2. Test results documented. 3. No severity Fatal or High problem outstanding. 4. Outstanding severity Medium and Low problems documented.
Integration Test
Objective: To test the interfaces between program modules.
Test Types: 1. Conversion 2. Error-handling 3. Function 4. Regression
Entry Criteria: 1. Program Spec reviewed and available. 2. File/DB Design reviewed and available. 3. Unit test executed for the related program modules. 4. Integration test cases reviewed and ready. 5. Test data defined and ready.
Expected Result: 1. Integration test conditions 100% executed. 2. Test results documented. 3. No severity Fatal or High problem outstanding. 4. Outstanding severity Medium and Low problems documented.
System Test
Objective: To test the functional behaviour at the application level, the interfaces to other applications, and the technical aspects of the system.
Test Types: 1. Conversion 2. Error-handling 3. Function 4. Interface 5. Transaction flow
Entry Criteria: 1. Technical Spec reviewed and available. 2. Functional Spec reviewed and available. 3. Integration exit criteria met. 4. System test cases reviewed and ready. 5. Test environment ready.
Expected Result: 1. System test cases 100% executed. 2. No severity Fatal or High problem outstanding. 3. Test results documented. 4. Outstanding severity Medium and Low problems documented and a plan is in place for fixing them.
Testing Process
Test Plan
It consists of steps that define the overall process for conducting the tests. The table of contents of a test plan might contain the following.
Test Scope
Test Objective
Test Assumptions
Test Design
Risk Analysis
Roles & Responsibilities
Test Schedule
Test Environment
Communication Approach
Test Tools
Test Scope
It talks about two areas: 1. What is covered in the test? 2. What is not covered in the test? (basically with respect to functionalities)
Test Objective
It is nothing but setting a goal; it is a statement of what the tester is expected to accomplish or validate during a specific test activity. Example: Testing for this system should concentrate on validating the requirements based on the test case document.
Test Assumptions
These assumptions document test prerequisites which, if not met, could have a negative impact on the test. Example: The support team will be available throughout the testing period to solve technical and functional issues. The test team will inform the development team one day in advance if its support is required during the weekends.
Test Design
The test design details what types of tests must be conducted and what stages of testing are required (e.g. Unit, Integration, System, Performance, and UAT).
Risk Analysis
This section deals with the risks and their possible impact on the test effort. Examples: 1. Non-availability of testing resources. 2. Delay in environment readiness. 3. Any major change request raised during testing which calls for re-testing. 4. Hardware issues. 5. Poor system performance.
Roles & Responsibilities
This section defines who is responsible for each stage or type of testing and what his/her role is.
Test Schedule
This section includes the major test activities and the start and end dates for the same.
Test Environment
Environment requirements for each stage and type of testing should be outlined in this section of the plan. Example: unit testing may be conducted in the development environment, while separate environments may be needed for integration and system testing.
Communication Approach
This section covers the various communication mechanisms to be used, such as formal and informal meetings, the defect tracking mechanism, etc.
Test Tools
Any tools that will be needed to support the testing process should be included here.
Defect Tracking
From the producer's viewpoint, a defect is a deviation from specifications, whether missing, wrong, or extra. From the customer's viewpoint, a defect is anything that causes customer dissatisfaction.
Severity
The severity of a defect should be assigned objectively by the test team based on pre-defined severity descriptions. Example: a severity-one defect may be defined as one that causes data corruption, a system crash, security violations, etc. In large projects, it may also be necessary to assign a priority to each defect, which determines the order in which defects should be fixed.
Bug Life Cycle
Open: The problem has been reported and is awaiting analysis or a fix.
Fixed: The problem has been fixed in the development environment and the fix is ready to be migrated to the testing environment.
Closed: The fix has been verified in the testing environment and no further action is required.
Rejected: The problem has been analyzed and it has been determined that no fix is required.
Waived: Any further action on the problem is pending due to a justifiable reason. The deferral has to be approved by the project manager.
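A sketch of this life cycle as a small state machine; the transition map, the reopen path, and parts of the status wording are assumptions inferred from the descriptions above:

    from enum import Enum

    class BugStatus(Enum):
        OPEN = "Open"
        FIXED = "Fixed"
        CLOSED = "Closed"
        REJECTED = "Rejected"
        WAIVED = "Waived"

    # Assumed allowed transitions, inferred from the descriptions above.
    TRANSITIONS = {
        BugStatus.OPEN:     {BugStatus.FIXED, BugStatus.REJECTED, BugStatus.WAIVED},
        BugStatus.FIXED:    {BugStatus.CLOSED, BugStatus.OPEN},  # reopen if retest fails
        BugStatus.WAIVED:   {BugStatus.OPEN},                    # deferral lifted
        BugStatus.REJECTED: set(),
        BugStatus.CLOSED:   set(),
    }

    def move(current, new):
        # Apply a status change only if the life cycle allows it.
        if new not in TRANSITIONS[current]:
            raise ValueError(f"Illegal transition: {current} -> {new}")
        return new

    status = BugStatus.OPEN
    status = move(status, BugStatus.FIXED)   # fix migrated to test environment
    status = move(status, BugStatus.CLOSED)  # retest passed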
FLC: Field Level Checks. FLV: Field Level Validations. FC: Functional Check.
Example:
[Figure: example with Employee and Pay slip screens, showing DDC]
Note: Only valid entries are entered to check the DDC and DTC
1. Identify the flow of the application.
2. Understand the flow.
3. Identify the scenarios.
4. Break the scenarios into sub-scenarios.
Performance Testing
Stress
Load
Stress
The best way to capture the nature of Web site load is to identify and track (e.g. using a log analyzer) a set of key user-session variables that are applicable and relevant to your Web site traffic. Some of the variables that could be tracked include:
The length of the session (measured in pages).
The duration of the session (measured in minutes and seconds).
The types of pages visited during the session (e.g., home page, product information page, credit card information page).
The typical/most popular flow or path through the website.
The percentage of browse vs. purchase sessions.
The percentage of user types (new user vs. returning registered user).
Measure how many people visit the site per day, week, or month. Then break these current traffic patterns down into one-hour time slices and identify the peak hours (e.g. whether the site gets a lot of traffic during lunch time) and the number of users during those peak hours. This information can then be used to estimate the number of concurrent users on the site, as sketched below.
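As a worked illustration, the tracked variables can be turned into a concurrency estimate using the standard relationship concurrent users = arrival rate x average session duration (Little's law); the numbers below are assumed, not from the paper:

    # Illustrative inputs drawn from log analysis (assumed values).
    peak_hour_sessions = 3600    # sessions observed in the peak hour
    avg_session_minutes = 5.0    # average session duration

    arrival_rate_per_sec = peak_hour_sessions / 3600.0          # 1 session/s
    concurrent_users = arrival_rate_per_sec * (avg_session_minutes * 60)
    print(f"Estimated concurrent users at peak: {concurrent_users:.0f}")  # ~300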
Load
Performance tests determine the end-to-end timing (benchmarking) of various time-critical business processes and transactions while the system is under low load but with a production-sized database. This sets the best possible performance expectation under a given configuration of infrastructure. It also highlights, very early in the testing process, whether changes need to be made before load testing is undertaken.
How to Implement Performance Testing
A key indicator of the quality of a performance test is repeatability. Re-executing a performance test multiple times should give the same set of results each time. If the results are not the same each time, then the differences in results from one run to the next cannot be attributed to changes in the application, configuration, or environment.
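A sketch of one way to check repeatability, assuming the transaction under test can be invoked and timed programmatically; the transaction function here is a placeholder:

    import statistics
    import time

    def transaction():
        # Placeholder for a time-critical business transaction.
        time.sleep(0.01)

    def run_benchmark(iterations=50):
        # Time the transaction repeatedly and return the mean duration.
        timings = []
        for _ in range(iterations):
            start = time.perf_counter()
            transaction()
            timings.append(time.perf_counter() - start)
        return statistics.mean(timings)

    runs = [run_benchmark() for _ in range(3)]
    spread = (max(runs) - min(runs)) / min(runs)
    # With nothing changed between runs, the spread should stay small;
    # a large spread points at the environment, not the application.
    print(f"run means: {runs}, relative spread: {spread:.1%}")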
Testing Types
Alpha Testing: A customer conducts alpha testing at the developer's site. The software is used in a natural setting, with the developer recording errors and usage problems. Alpha tests are conducted in a controlled environment by the developer.
Beta Testing: Beta testing is conducted at one or more customer sites by the end user(s) of the software. The developer will not be present at the customer's site, so the beta test is a live application of the software in an environment that cannot be controlled by the developer. The customer records all the problems (real or apparent) encountered during beta testing and reports them to the developer at regular intervals. As a result of the problems reported during the beta test, the software developer makes modifications and then prepares for the release of the software product to the entire customer base.
Mapping: At this point the tester will have both the database and front-end screen shots. The database should be carefully mapped by understanding the entries made in the front end (input) and the values displayed in the front end (output). The purpose of each field, screen, and functionality should also be understood; the tester should arrive at clarity on the input and output of the application. The tester should use his or her discretion to decide the validations required at field, module, and application level, depending on the application's purpose. Once these are done, the test team can start building test conditions for the application and from then on proceed with the normal test-preparation style.
Test Execution Process: The preparation to test the application is now over; the test team should next plan the execution of the tests on the application. Tests on the application are done in stages: execution takes place in three, or sometimes four, passes on the state of the application. They are:
Pass 0: This is done to check the health of the system before the start of the test process. This stage may not be applicable to most test processes. Free-form testing is adopted in this stage.
Pass 1 (Comprehensive): All the test scripts developed for testing are executed. In some cases the application may not have certain module(s) ready for test; these will be covered comprehensively in the next pass. The testing here should cover not only all test cases but also the business cycles defined in the application.
Pass 2 (Discrepancy): All test scripts that resulted in a defect during the comprehensive pass should be executed again; in other words, all defects that have been fixed should be retested. Function points that may be affected by a defect should also be taken up for testing. Automated test scripts captured during pass one are used here; this type of testing is called regression testing. Scripts for defects that are not yet fixed are executed only after the defects are fixed. (A sketch of how the pass-2 scope might be selected follows.)
Pass 3 (Sanity): This is the final round in the test process. It is done either at the client's site or at Take, depending on the strategy adopted, in order to check whether the system is sane enough for the next stage, i.e. UAT or production as the case may be, under an isolated environment. Ideally, the defects fixed in the previous pass are checked, and free-form testing is done to ensure integrity.
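A sketch of how the pass-2 (discrepancy) scope could be selected, assuming each test script records the defects it raised and the function points it touches; all identifiers are illustrative:

    # Hypothetical test-script records from the comprehensive pass.
    scripts = [
        {"id": "TS-01", "defects": ["D-101"], "function_points": {"login"}},
        {"id": "TS-02", "defects": [],        "function_points": {"payslip"}},
        {"id": "TS-03", "defects": [],        "function_points": {"login"}},
    ]

    # Scripts that raised defects are retested, plus any script touching
    # a function point affected by those defects (here: "login").
    affected = set().union(*(s["function_points"] for s in scripts if s["defects"]))
    pass2 = [s["id"] for s in scripts
             if s["defects"] or (s["function_points"] & affected)]
    print(pass2)  # ['TS-01', 'TS-03']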
Testing Process
[Flow: Test Planning, then Test Execution, then Test Results]

Inputs: baseline documents, the Test Strategy/Test Plan, environment and control reviews, and the enrichment process.

Baseline documents:
1. Business Requirement
2. RFD(s) and BD(s)
3. Design Specification
4. E-mails
5. Minutes of meetings

Activities:
1. Prepare Test Plan
2. Prepare high-level conditions
3. Prepare Test Cases
4. Prepare Test Data
5. Set up Test Environment
6. Set up Test Bed
7. Receive Executable
8. Execute Pass 1 (Comprehensive Testing)
9. Prepare Test Problem Report (TPR)
10. Release Executable for fixing
11. Execute Pass 2 (Discrepancy Pass)

Daily Reports: 1. Test Problem Report 2. Test Summary Report 3. Downtime Log
Final Reports: 1. Test Problem Report 2. Traceability Matrix 3. Functionalities not tested
Severity vs. Cause
Severity levels: Fatal, High, Medium, Low.
Causes: Dependency on Customer/End User, Inadequate Tools, Lack of Standards, Lack of Skills, Lack of Training, Oversight.