
Types of Testing

1. Monkey Testing: Covering only the basic functionalities of the application during testing is called “Monkey Testing”. It is also
known as “Chimpanzee Testing”. This type of testing is applied by the testing team when there is a lack of time.

2. Exploratory Testing: Covering the activities of the application level by level during testing is called “Exploratory Testing”.

3. Sanity Testing: Whether the build released by the development team is stable enough for complete testing or not? This type of
check is called “Sanity Testing”, also known as “Tester Acceptance Testing” or “Build Verification Testing”.

4. Smoke Testing: A small shakeup of the “Sanity Testing” process. During smoke testing the tester tries to analyze the reason
why the build is not working, before starting the actual testing.

5. Big Bang Testing: A single stage of testing performed after completion of the entire coding is called “Big Bang
Testing”. It is also known as “Informal Testing”.

6. Incremental Testing: - A step-by-step testing process from unit level to system level is called “Incremental
Testing”. This testing is also known as “Formal Testing”.

7. Mutation Testing: Mutation means a small change in the code. This technique is used by white box testers, who apply it to
already tested programs to estimate the completeness and correctness of the testing (code-level testing under White Box Testing);
see the short sketch after this list of definitions.

8. Static and Dynamic Testing: -

• A test conducted on a program without executing it is called “Static Testing”.

• A test conducted on a program by executing it is called “Dynamic Testing”.

9. Manual Vs Automation: -

A tester conducting a test on a build without the help of any third-party tool is called “Manual Testing”.

A tester conducting a test on a build with the help of a software testing tool is called “Test Automation”.

10. Retesting: Re-execution of a test on the application build with multiple sets of test data is called “Retesting”.

11. Regression Testing: Re-execution of tests on a modified build, to ensure that the bug fix works and to check for the possibility
of side effects, is called “Regression Testing”.

12. Adhoc Testing: Testing without any plan, in order to understand the functionality of the application build, is called
“Adhoc Testing”.

Verification: Are we building the system right? This type of checking is called Verification.

Validation: Are we building the right system? This type of checking is called Validation
(w.r.t. Customer Requirements).
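
For illustration of point 7 above, here is a minimal Python sketch of the mutation idea; the function and its mutant are invented examples, not taken from any particular project. A single operator is changed in already tested code and the existing test data is run against the mutant: if no input distinguishes the two versions, the testing was incomplete.

# Hypothetical example of mutation testing: one small change (a "mutant") is
# introduced into tested code to check whether the existing test data catches it.

def is_adult(age):               # original, already tested function
    return age >= 18

def is_adult_mutant(age):        # mutant: ">=" changed to ">"
    return age > 18

# A good test set should include the boundary value 18, the only input
# that distinguishes the original program from this mutant.
for test_input in (17, 18, 30):
    if is_adult(test_input) != is_adult_mutant(test_input):
        print(f"mutant killed by input {test_input}")    # testing is adequate here
        break
else:
    print("mutant survived - the test data misses the boundary case")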

Software Development Life Cycle: - The process used to create a software product from its initial conception to
its public release is known as the “Software Development Lifecycle”.

Information Gathering

Analysis of that Information

Design

Coding

Testing

Maintenance

[Diagram: lifecycle development phases (Information Gathering (BRS), Analysis, Design, Coding, System Testing, Maintenance)
mapped to the corresponding lifecycle testing activities (Reviews, Review Prototype, White Box Testing, Black Box Testing,
Test Software Changes); Verification covers the earlier phases and Validation the later ones.]


Refinement of V-model: The V-model is expensive for small and medium scale
organizations. For this reason, some organizations follow the Refinement
of the V-model.

BRS/CRS/URS   <-->  User Acceptance Testing
   Reviews
S/W RS        <-->  Functional & System Testing
HLD           <-->  Integration Testing
   Reviews
LLD           <-->  Unit Testing / Micro Testing

Coding

Integration Testing:

After completion of unit-level testing, developers combine those modules into a system with respect to the HLDs. During this
composition of modules, they concentrate on integration testing to verify the coupling of those modules.
There are three approaches to conduct integration testing:

1. Top-Down Approach: Testing the main module without some of the sub-modules being completed is called the
“Top-Down Approach”. In this scenario the white box tester uses a temporary program, called a “Stub”, in place of an
under-construction sub-module.

Stub -> A Stub is a called program; it is called by the main program.

2. Bottom-Up Approach: A white box tester conducts a test on the sub-modules without the main module being completed.
In this scenario the white box tester uses a temporary program, called a “Driver”, in place of the under-construction
main module.

Driver -> A Driver is a calling program; it calls the sub-modules.

3. Sandwich Approach: This is a combination of the Top-Down and Bottom-Up approaches. In this scenario the white
box tester uses both a Driver and a Stub in place of the under-construction modules.
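
The following minimal Python sketch illustrates the stub and driver ideas; the main module, sub-module, and values are hypothetical examples, not taken from any real project.

# Hypothetical illustration of a stub (top-down) and a driver (bottom-up).

# --- Top-down: the main module is ready, a sub-module is not. ---
def get_exchange_rate_stub(currency):
    # Stub: a temporary program CALLED BY the main program; it returns a
    # canned value in place of the still-under-construction sub-module.
    return 1.0

def convert_price(amount, currency, rate_lookup=get_exchange_rate_stub):
    return amount * rate_lookup(currency)     # main-module logic under test

print(convert_price(100, "EUR"))              # integration path exercised via the stub

# --- Bottom-up: a sub-module is ready, the main module is not. ---
def apply_discount(amount, percent):          # finished sub-module
    return amount - amount * percent / 100

def driver():
    # Driver: a temporary CALLING program that invokes the sub-module
    # in place of the still-under-construction main module.
    assert apply_discount(200, 10) == 180
    print("sub-module verified by the driver")

driver()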

Test Strategy:
Before starting any testing activities, the team lead has to think through the approach and arrive at a strategy. The strategy
describes the approach to be adopted for carrying out test activities, including the planning activities. It is a formal
document, the very first document regarding the testing area, and is prepared at a very early stage in the SDLC. This
document must provide a generic test approach as well as specific details regarding the project. The following areas
are addressed in the test strategy document.

Test Plan
The test strategy identifies the multiple test levels that are going to be performed for the project. Activities at each
level must be planned well in advance and formally documented. The individual test levels are carried out based on
these individual plans.
The plans are to be prepared by experienced people only. In all test plans, the ETVX (Entry-Task-Validation-Exit)
criteria are to be mentioned. Entry means the entry point to that phase; for example, for unit testing, the coding
must be complete before unit testing can start. Task is the activity that is performed. Validation is the
way in which the progress, correctness, and compliance are verified for that phase. Exit states the completion
criteria of that phase, checked after the validation is done. For example, the exit criterion for unit testing is that all
unit test cases must pass.
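
As an illustration only, the ETVX criteria for the unit-testing level mentioned above could be recorded in a simple structure such as the sketch below; the wording of each criterion is an assumption made for the example, not a prescribed standard.

# Hypothetical sketch: ETVX criteria for the unit testing phase recorded as a
# plain data structure that could be included in a test plan.
unit_test_etvx = {
    "Entry":      "Coding of the unit is complete and the code builds cleanly",
    "Task":       "Design and execute unit test cases for every module",
    "Validation": "Review unit test results and track defects and coverage",
    "Exit":       "All unit test cases pass",
}

for phase, criterion in unit_test_etvx.items():
    print(f"{phase:<10} : {criterion}")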

What’s a test case?


“A test case specifies the pretest state of the IUT and its environment, the test inputs or conditions, and the
expected result. The expected result specifies what the IUT should produce from the test inputs. This
specification includes messages generated by the IUT, exceptions, returned values, and resultant state of
the IUT and its environment. Test cases may also specify initial and resulting conditions for other objects
that constitute the IUT and its environment.”
What’s a scenario?
A scenario is a hypothetical story, used to help a person think through a complex problem or system.
The test cases will have a generic format as below.

• Test case ID - The test case id must be unique across the application

• Test case description - The test case description must be very brief.

• Test prerequisite - The test prerequisite clearly describes what should be present in the system
before the test can be executed.

• Test Inputs - The test input is nothing but the test data that is prepared to be fed to the system.

• Test steps - The test steps are the step-by-step instructions on how to carry out the test.

• Expected Results - The expected results are the ones that say what the system must give as output or
how the system must react based on the test steps.

• Actual Results – The actual results record what the system actually produced as output, or how it actually
reacted, for the given inputs.

• Pass/Fail - If the expected and actual results are the same, the test is a Pass; otherwise it is a Fail.

The test cases are classified into positive and negative test cases. Positive test cases are designed to prove that the
system accepts valid inputs and processes them correctly; negative test cases are designed to prove that the system
rejects invalid inputs and handles them gracefully.
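
A minimal sketch of a single test case captured with the fields listed above, using a hypothetical login screen as the item under test (all field values are invented for illustration):

# Hypothetical example of one test case recorded with the fields listed above.
test_case = {
    "id":              "TC_LOGIN_001",                    # unique across the application
    "description":     "Verify login with valid credentials",
    "prerequisite":    "A registered user account exists in the system",
    "inputs":          {"username": "demo_user", "password": "Demo@123"},
    "steps":           ["Open the login page",
                        "Enter the username and password",
                        "Click the Login button"],
    "expected_result": "User is redirected to the home page",
    "actual_result":   "User is redirected to the home page",
}
# Pass/Fail is derived by comparing the expected and actual results.
test_case["status"] = ("Pass" if test_case["expected_result"] == test_case["actual_result"]
                       else "Fail")
print(test_case["id"], "-", test_case["status"])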

What is a Defect?
• Any deviation from specification
• Anything that causes user dissatisfaction
• Incorrect output
• Software does not do what it is intended to do.

Bug / Defect / Error: -

• Software is said to have a bug if its features deviate from the specification.

• Software is said to have a defect if it has unwanted side effects.

• Software is said to have an error if it gives incorrect output.

Life Cycle of a Defect


The following self-explanatory flow describes the life cycle of a defect:

Submit Defect -> Review, Verify and Qualify -> Assign -> Fix/Change -> Validate -> Close

At the review stage a defect may instead be Deferred, or be marked as Duplicate, Rejected, or needing More Info,
in which case the defect is updated (or closed) accordingly.

priority: The level of (business) importance assigned to an item, e.g. defect.


severity: The degree of impact that a defect has on the development or operation of a
component or system.

How do companies expect defect reporting to be communicated by the tester to the development team? Can an Excel sheet
template be used for defect reporting? If so, what are the common fields to be included, and who assigns the priority and
severity of the defect?

To report bugs in Excel, use a sheet with columns such as:

S.No. | Module | Screen/Section | Issue Detail | Severity | Priority | Issue Status

This is how to report bugs in an Excel sheet; filters can also be set on the column attributes.
Most companies, however, use a SharePoint-style process for reporting bugs. In this approach, when the project comes in for
testing, module-wise details of the project are entered into the defect management system being used. It contains the following
fields:
1. Date
2. Issue brief
3. Issue description (used by the developer to reproduce the issue)
4. Issue status (active, resolved, on hold, suspended, not able to reproduce)
5. Assigned to (names of the members allocated to the project)
6. Priority (high, medium, low)
7. Severity (major, medium, low)
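
A minimal Python sketch of producing such a defect report as a spreadsheet-friendly CSV file with the fields listed above; the sample defect row and the file name are invented for illustration.

# Hypothetical sketch: writing a defect report with the fields above to a CSV
# file that can be opened, sorted, and filtered in Excel.
import csv

FIELDS = ["Date", "Issue brief", "Issue description", "Issue status",
          "Assigned to", "Priority", "Severity"]

defects = [
    {"Date": "2024-01-15",
     "Issue brief": "Login button unresponsive",
     "Issue description": "Open the login page, enter valid credentials, "
                          "click Login - nothing happens",
     "Issue status": "Active",
     "Assigned to": "dev_team_member",
     "Priority": "High",
     "Severity": "Major"},
]

with open("defect_report.csv", "w", newline="") as report:
    writer = csv.DictWriter(report, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(defects)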

124. What's the difference between load and stress testing?


One of the most common, but unfortunate misuse of terminology is treating “load testing” and “stress testing” as
synonymous. The consequence of this ignorant semantic abuse is usually that the system is neither properly “load tested”
nor subjected to a meaningful stress test.
Stress testing is subjecting a system to an unreasonable load while denying it the resources (e.g., RAM, disc, mips,
interrupts, etc.) needed to process that load. The idea is to stress a system to the breaking point in order to find bugs that
will make that break potentially harmful. The system is not expected to process the overload without adequate resources,
but to behave (e.g., fail) in a decent manner (e.g., not corrupting or losing data). Bugs and failure modes discovered under
stress testing may or may not be repaired depending on the application, the failure mode, consequences, etc. The load
(incoming transaction stream) in stress testing is often deliberately distorted so as to force the system into resource
depletion.
Load testing is subjecting a system to a statistically representative (usually) load. The two main reasons for using such
loads are in support of software reliability testing and in performance testing. The term 'load testing' by itself is too vague
and imprecise to warrant use: for example, do you mean 'representative load', 'overload', 'high load', etc.? In performance
testing, load is varied from a minimum (zero) to the maximum level the system can sustain without running out of
resources or having transactions suffer (application-specific) excessive delay.
A third use of the term is as a test whose objective is to determine the maximum sustainable load the system can handle.
In this usage, 'load testing' is merely testing at the highest transaction arrival rate in performance testing.
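
As a rough illustration of generating a representative load and measuring response times, here is a minimal Python sketch using only the standard library; the target URL, user count, and request count are assumptions made for the example, and a real load or performance test would normally use a dedicated tool.

# A minimal load-generation sketch (Python standard library only), assuming a
# hypothetical target endpoint and load profile.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET_URL = "http://localhost:8080/health"    # hypothetical endpoint
CONCURRENT_USERS = 20                          # assumed representative load
REQUESTS_PER_USER = 50

def one_user(_):
    latencies = []
    for _ in range(REQUESTS_PER_USER):
        start = time.perf_counter()
        try:
            with urllib.request.urlopen(TARGET_URL, timeout=5) as resp:
                resp.read()
        except Exception:
            latencies.append(None)             # record failures under load
            continue
        latencies.append(time.perf_counter() - start)
    return latencies

with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
    results = [lat for user in pool.map(one_user, range(CONCURRENT_USERS)) for lat in user]

ok = [lat for lat in results if lat is not None]
print(f"requests: {len(results)}, failures: {len(results) - len(ok)}")
if ok:
    print(f"avg latency: {sum(ok) / len(ok):.3f}s, max: {max(ok):.3f}s")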

142. What can be done if requirements are changing continuously?


A common problem and a major headache.
- Work with the project's stakeholders early on to understand how requirements might change so that alternate test plans
and strategies can be worked out in advance, if possible.
- It's helpful if the application's initial design allows for some adaptability so that later changes do not require redoing the
application from scratch.
- If the code is well-commented and well-documented this makes changes easier for the developers.
- Use rapid prototyping whenever possible to help customers feel sure of their requirements and minimize changes.
- The project's initial schedule should allow for some extra time commensurate with the possibility of changes.
- Try to move new requirements to a 'Phase 2' version of an application, while using the original requirements for the
'Phase 1' version.
- Negotiate to allow only easily-implemented new requirements into the project, while moving more difficult new
requirements into future versions of the application.
- Be sure that customers and management understand the scheduling impacts, inherent risks, and costs of significant
requirements changes. Then let management or the customers (not the developers or testers) decide if the changes are
warranted - after all, that's their job.
- Balance the effort put into setting up automated testing with the expected effort required to re-do them to deal with
changes.
- Try to design some flexibility into automated test scripts.
- Focus initial automated testing on application aspects that are most likely to remain unchanged.
- Devote appropriate effort to risk analysis of changes to minimize regression testing needs.
- Design some flexibility into test cases (this is not easily done; the best bet might be to minimize the detail in the test
cases, or set up only higher-level generic-type test plans).
- Focus less on detailed test plans and test cases and more on ad hoc testing (with an understanding of the added risk that
this entails).
