Testing Interview Questions
What is the difference between system testing and end-to-end testing?
System testing is testing conducted on a complete, integrated system to evaluate the system's
compliance with its specified requirements; the system under test is exercised on its own.
End-to-end testing exercises the entire business flow with realistic (real-time) data, covering the
whole project, i.e. the application together with all the systems it integrates with.
How do you write test cases for a search web page like Google?
Test Case No. 1
Test Data: Enter the URL of the website and press the Enter key.
Expected Result: The home page of the website should appear.
Test Case No. 2
Test Data: Check whether all the sub-links are enabled or disabled.
Expected Result: All the sub-links must be in the enabled state.
Test Case No. 3
Test Data: Check whether the Search button is enabled or disabled.
Expected Result: The Search button should be in the enabled state.
Test Case No. 4
Test Data: Check whether the Search text box accepts all types of data.
Expected Result: It should accept all types of data (numeric, characters, special characters, etc.).
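The manual cases above can also be sketched as automated UI checks. The following is a minimal pytest + Selenium sketch, not part of the original answer; it assumes Chrome and chromedriver are available and uses google.com's element names ("q" for the search box, "btnK" for the search button), which would need adjusting for any other site.

import pytest
from selenium import webdriver
from selenium.webdriver.common.by import By

URL = "https://www.google.com"

@pytest.fixture
def driver():
    drv = webdriver.Chrome()
    yield drv
    drv.quit()

def test_home_page_loads(driver):
    # Test Case 1: entering the URL should bring up the home page.
    driver.get(URL)
    assert "Google" in driver.title

def test_search_button_enabled(driver):
    # Test Case 3: the search button should be in the enabled state.
    driver.get(URL)
    button = driver.find_element(By.NAME, "btnK")
    assert button.is_enabled()

def test_search_box_accepts_all_data(driver):
    # Test Case 4: the search box should accept numeric, character and
    # special-character input.
    driver.get(URL)
    box = driver.find_element(By.NAME, "q")
    box.send_keys("abc 123 !@#")
    assert box.get_attribute("value") == "abc 123 !@#"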
What is the difference between a use case, a test case and a test plan?
Use case: a description of a series of events that occur between a user (actor) and the system in order to accomplish a goal.
Test case: a set of actions, inputs and expected results used to verify a specific functionality of the application.
Test plan: a document describing the scope, approach, resources and schedule of the intended testing activities. The difference between use cases and test cases is discussed further below.
How can we design test cases from requirements? Do the requirements represent the exact
functionality of the AUT?
Yes, the requirements should represent the exact functionality of the AUT (Application Under Test).
First, analyze the requirements very thoroughly in terms of functionality. Then think about a suitable
test case design technique for writing the test cases, such as the black-box techniques Equivalence
Class Partitioning (ECP), Boundary Value Analysis (BVA), Error Guessing and Cause-Effect Graphing.
Using these techniques, design test cases that have a good chance of exposing defects.
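As a concrete illustration of ECP and BVA, here is a minimal pytest sketch. The requirement ("age must be between 18 and 60") and the function is_valid_age are hypothetical examples, not taken from the text above.

import pytest

def is_valid_age(age: int) -> bool:
    """Hypothetical function under test: valid ages are 18-60 inclusive."""
    return 18 <= age <= 60

# Equivalence Class Partitioning: one representative value per class.
@pytest.mark.parametrize("age, expected", [
    (10, False),   # invalid class: below the valid range
    (35, True),    # valid class: inside the range
    (75, False),   # invalid class: above the valid range
])
def test_equivalence_classes(age, expected):
    assert is_valid_age(age) == expected

# Boundary Value Analysis: values on and around each boundary.
@pytest.mark.parametrize("age, expected", [
    (17, False), (18, True), (19, True),   # lower boundary
    (59, True), (60, True), (61, False),   # upper boundary
])
def test_boundary_values(age, expected):
    assert is_valid_age(age) == expected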
How do you launch test cases in Test Director, and where are they saved?
You create the test cases in the Test Plan tab and link them to the requirements in the Requirements
tab. Once the test cases are ready, you change their status to Ready, go to the Test Lab tab, create
a test set, add the test cases to the test set, and run them from there.
For automation, you create a new automated test in the Test Plan tab, launch the tool, create and
save the script, and then run it from the Test Lab in the same way as the manual test cases.
The test cases are stored under the Test Plan tab, or more precisely in Test Director's database
(Test Director is now referred to as Quality Center).
How do you handle a bug that arises at the time of testing?
First check the status of the bug, then check whether the bug is valid or not, then forward it to the
team leader, and after confirmation forward it to the concerned developer.
What do you mean by reproducing a bug? If the bug is not reproducible, what is the next step?
Reproducing a bug simply means making the defect occur again. For example, if you click a button
and the corresponding action does not happen, it is a bug; if the developer is unable to observe this
behaviour, he will ask us to reproduce it.
In another scenario, if the client reports a defect in production, we have to reproduce it in the test
environment.
If the bug cannot be reproduced, document the exact steps, environment details, screenshots and
logs, and log it as intermittent/non-reproducible so that it can be monitored.
On what basis do we assign priority and severity to a bug? Give one example each of a
high-priority/low-severity bug and a high-severity/low-priority bug.
Severity reflects the technical impact of the defect and is assigned by the tester; priority reflects
business urgency and is normally assigned by the team leader or project manager, not the tester.
For example,
High severity: application crashes, hardware-related failures.
Low severity: user interface defects.
High priority: an error message not appearing when it should, calculation bugs, etc.
Low priority: wrong alignment; a wrong final output in a rarely used report can also be low priority
even though its severity is high.
How is the traceability of a bug followed?
Bug traceability can be maintained in several ways:
1. Mapping functional requirement scenarios (FS doc) - test case IDs - failed test cases (bugs).
2. Mapping requirements (RS doc) - test case IDs - failed test cases.
3. Mapping test plans (TP doc) - test case IDs - failed test cases.
4. Mapping business requirements (BR doc) - test case IDs - failed test cases.
5. Mapping high-level design (design doc) - test case IDs - failed test cases.
Usually the traceability matrix maps the client/business requirements, the functional specification,
the test plan and the test cases to one another.
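A traceability matrix is ultimately just a mapping, so it can be represented very simply in code. Below is a small sketch with hypothetical requirement, test case and defect IDs (none of them come from the answer above).

# Hypothetical requirement -> test cases -> defects mapping.
traceability_matrix = {
    "REQ-001": {"test_cases": ["TC-101", "TC-102"], "defects": []},
    "REQ-002": {"test_cases": ["TC-201"], "defects": ["BUG-017"]},
    "REQ-003": {"test_cases": [], "defects": []},  # not yet covered
}

# Requirements without test cases are coverage gaps; requirements with
# defects have failed test cases traced back to them.
uncovered = [req for req, row in traceability_matrix.items() if not row["test_cases"]]
failing = [req for req, row in traceability_matrix.items() if row["defects"]]
print("Uncovered requirements:", uncovered)
print("Requirements with open defects:", failing)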
How many functional testing tools are available? What is the easiest scripting language used?
There are many, but the most widely used functional testing tools are WinRunner, SilkTest, Rational
Robot and QTP (QuickTest Professional). Of their scripting languages, VBScript (used by QTP) is
generally considered the easiest to learn.
What is the major difference between web services and a client/server environment?
The major differences between them are:
Web services: these are oriented towards the internet. A web service may come from the Java side
(deployed on Apache) or the Windows side (deployed on IIS), and it is consumed over the network by
other programs rather than through a user interface; testing web services is a separate topic in its
own right.
Client/server: the system involves a client, or GUI (through which the user sees the front end and
provides input to the system), and a server (usually the back end) where the data entered via the
GUI is saved.
A test case captures the different perspectives on a functionality to be tested, and is usually written
by a test engineer. The person who wrote the test cases may execute them, or another person may.
The use case above is converted into test cases keeping in mind the different perspectives
(negative and positive):

Action      | Expected Value                           | Actual Value      | Result
click on OK | screen 1 should appear (+ve perspective) | screen 1 appeared | pass
click on OK | screen 2 should appear (-ve perspective) | screen 1 appeared | fail
click on OK | screen 2 should appear (-ve perspective) | screen 2 appeared | pass

The differences between a test case and a use case are:
A use case is prepared by the high-level (business/management) team, whereas test cases are
prepared by test engineers.
A use case is prepared for validating the application in terms of actors, actions and responses,
whereas a test case is used to test a specific functionality of the application.
A use case is a description of a series of events that occur between the user and the system,
i.e. how the system responds when a particular action is taken by the user.
A use case gives a generalized scenario, but in test cases we test the various aspects of that
generalized scenario more thoroughly.
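The table above is essentially data, so a test case set can be represented and evaluated programmatically. The sketch below is not part of the original answer; it derives the pass/fail result by comparing expected and actual values, and the screen names are the hypothetical values from the table.

# Each row records an action, the expected outcome and the actual outcome;
# the result column is derived by comparing expected with actual.
test_cases = [
    {"action": "click on OK", "expected": "screen 1", "actual": "screen 1"},
    {"action": "click on OK", "expected": "screen 2", "actual": "screen 1"},
    {"action": "click on OK", "expected": "screen 2", "actual": "screen 2"},
]

for number, tc in enumerate(test_cases, start=1):
    result = "pass" if tc["expected"] == tc["actual"] else "fail"
    print(f"TC{number}: {tc['action']}  expected={tc['expected']}  "
          f"actual={tc['actual']}  result={result}")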
What are the different types of software testing?
o Black box testing - not based on any knowledge of internal design or code. Tests are
based on requirements and functionality.
o White box testing - based on knowledge of the internal logic of an application's code.
Tests are based on coverage of code statements, branches, paths, conditions.
o unit testing - the most 'micro' scale of testing; to test particular functions or code
modules. Typically done by the programmer and not by testers, as it requires detailed knowledge of
the internal program design and code. Not always easily done unless the application has a well-
designed architecture with tight code; may require developing test driver modules or test
harnesses.
o incremental integration testing - continuous testing of an application as new
functionality is added; requires that various aspects of an application's functionality be independent
enough to work separately before all parts of the program are completed, or that test drivers be
developed as needed; done by programmers or by testers.
o integration testing - testing of combined parts of an application to determine if they
function together correctly. The 'parts' can be code modules, individual applications, client and
server applications on a network, etc. This type of testing is especially relevant to client/server and
distributed systems.
o functional testing - black-box type testing geared to functional requirements of an
application; this type of testing should be done by testers. This doesn't mean that the programmers
shouldn't check that their code works before releasing it (which of course applies to any stage of
testing.)
o system testing - black-box type testing that is based on overall requirements
specifications; covers all combined parts of a system.
o end-to-end testing - similar to system testing; the 'macro' end of the test scale;
involves testing of a complete application environment in a situation that mimics real-world use,
such as interacting with a database, using network communications, or interacting with other
hardware, applications, or systems if appropriate.
o sanity testing or smoke testing - typically an initial testing effort to determine if a new
software version is performing well enough to accept it for a major testing effort. For example, if
the new software is crashing systems every 5 minutes, bogging down systems to a crawl, or
corrupting databases, the software may not be in a 'sane' enough condition to warrant further
testing in its current state.
o regression testing - re-testing after fixes or modifications of the software or its
environment. It can be difficult to determine how much re-testing is needed, especially near the
end of the development cycle. Automated testing tools can be especially useful for this type of
testing.
o acceptance testing - final testing based on specifications of the end-user or customer,
or based on use by end-users/customers over some limited period of time.
o load testing - testing an application under heavy loads, such as testing of a web site
under a range of loads to determine at what point the system's response time degrades or fails.
o stress testing - term often used interchangeably with 'load' and 'performance' testing.
Also used to describe such tests as system functional testing while under unusually heavy loads,
heavy repetition of certain actions or inputs, input of large numerical values, large complex queries
to a database system, etc.
o performance testing - term often used interchangeably with 'stress' and 'load' testing.
Ideally 'performance' testing (and any other 'type' of testing) is defined in requirements
documentation or QA or Test Plans.
o usability testing - testing for 'user-friendliness'. Clearly this is subjective, and will
depend on the targeted end-user or customer. User interviews, surveys, video recording of user
sessions, and other techniques can be used. Programmers and testers are usually not appropriate
as usability testers.
o install/uninstall testing - testing of full, partial, or upgrade install/uninstall processes.
o recovery testing - testing how well a system recovers from crashes, hardware failures,
or other catastrophic problems.
o failover testing - typically used interchangeably with 'recovery testing'.
o security testing - testing how well the system protects against unauthorized internal or
external access, willful damage, etc; may require sophisticated testing techniques.
o compatibility testing - testing how well software performs in a particular
hardware/software/operating system/network/etc. environment.
o exploratory testing - often taken to mean a creative, informal software test that is not
based on formal test plans or test cases; testers may be learning the software as they test it.
o ad-hoc testing - similar to exploratory testing, but often taken to mean that the testers
have significant understanding of the software before testing it.
o context-driven testing - testing driven by an understanding of the environment, culture,
and intended use of software. For example, the testing approach for life-critical medical equipment
software would be completely different than that for a low-cost computer game.
o user acceptance testing - determining if software is satisfactory to an end-user or
customer.
o comparison testing - comparing software weaknesses and strengths to competing
products.
o alpha testing - testing of an application when development is nearing completion; minor
design changes may still be made as a result of such testing. Typically done by end-users or others,
not by programmers or testers.
o beta testing - testing when development and testing are essentially completed and final
bugs and problems need to be found before final release. Typically done by end-users or others,
not by programmers or testers.
o mutation testing - a method for determining if a set of test data or test cases is useful,
by deliberately introducing various code changes ('bugs') and retesting with the original test
data/cases to determine if the 'bugs' are detected. Proper implementation requires large
computational resources.
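As an illustration of the mutation testing idea described in the last item above, here is a minimal, hypothetical sketch (a real mutation-testing tool automates the generation and execution of mutants): the original function and a deliberately mutated copy are run against the same test data to see whether the tests detect the change.

def can_vote(age):
    """Original implementation: voting age is 18 or over."""
    return age >= 18

def can_vote_mutant(age):
    """Mutant: '>=' deliberately changed to '>'."""
    return age > 18

# The existing test data for the original function.
test_data = [(17, False), (18, True), (30, True)]

def passes_suite(fn):
    """Return True if fn gives the expected result for every case."""
    return all(fn(age) == expected for age, expected in test_data)

assert passes_suite(can_vote)                          # the original passes
print("Mutant killed:", not passes_suite(can_vote_mutant))
# Prints True: the age-18 case detects the mutation, so the test data is useful.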
What are some common problems in the software development process?
o poor requirements - if requirements are unclear, incomplete, too general, or not
testable, there will be problems.
o unrealistic schedule - if too much work is crammed in too little time, problems are
inevitable.
o inadequate testing - no one will know whether or not the program is any good until the
customer complains or systems crash.
o featuritis - requests to pile on new features after development is underway; extremely
common.
o miscommunication - if developers don't know what's needed or customers have
erroneous expectations, problems are guaranteed.
What is software quality?
Quality software is reasonably bug-free, delivered on time and within budget, meets requirements
and/or expectations, and is maintainable.
However, quality is obviously a subjective term. It will depend on who the 'customer' is and their
overall influence in the scheme of things. A wide-angle view of the 'customers' of a software
development project might include end-users, customer acceptance testers, customer contract officers,
customer management, the development organization's management/accountants/testers/salespeople,
future software maintenance engineers, stockholders, magazine columnists, etc. Each type of
'customer' will have their own slant on 'quality' - the accounting department might define quality in
terms of profits while an end-user might define quality as user-friendly and bug-free.
What is good design?
'Design' could refer to many things, but often refers to 'functional design' or 'internal design'. Good
internal design is indicated by software code whose overall structure is clear, understandable, easily
modifiable, and maintainable; is robust with sufficient error-handling and status logging capability; and
works correctly when implemented. Good functional design is indicated by an application whose
functionality can be traced back to customer and end-user requirements.
For programs that have a user interface, it's often a good idea to assume that the end user will have
little computer knowledge and may not read a user manual or even the on-line help; some common
rules-of-thumb include:
o the program should act in a way that least surprises the user
o it should always be evident to the user what can be done next and how to exit
o the program shouldn't let the users do something stupid without warning them.
What are the five levels of the CMM (Capability Maturity Model)?
Level 1 - characterized by chaos, periodic panics, and heroic efforts required by individuals to
successfully complete projects. Few if any processes are in place; successes may not be repeatable.
Level 2 - software project tracking, requirements management, realistic planning, and configuration
management processes are in place; successful practices can be repeated.
Level 3 - standard software development and maintenance processes are integrated throughout an
organization; a Software Engineering Process Group is in place to oversee software processes, and
training programs are used to ensure understanding and compliance.
Level 4 - metrics are used to track productivity, processes, and products. Project performance is
predictable, and quality is consistently high.
Level 5 - the focus is on continuous process improvement. The impact of new processes and
technologies can be predicted and effectively implemented when required.
Perspective on CMM ratings: during 1997-2001, 1018 organizations were assessed. Of those, 27%
were rated at Level 1, 39% at 2, 23% at 3, 6% at 4, and 5% at 5. (For ratings during the period
1992-96, 62% were at Level 1, 23% at 2, 13% at 3, 2% at 4, and 0.4% at 5.) The median size of
organizations was 100 software engineering/maintenance personnel; 32% of organizations were
U.S. federal contractors or agencies. For those rated at Level 1, the most problematical key process
area was Software Quality Assurance.
What is the software life cycle?
The life cycle begins when an application is first conceived and ends when it is no longer in use. It
includes aspects such as initial concept, requirements analysis, functional design, internal design,
documentation planning, test planning, coding, document preparation, integration, testing,
maintenance, updates, retesting, phase-out, and other aspects.
What types of test tools are available?
o code analyzers - monitor code complexity, adherence to standards, etc.
o coverage analyzers - these tools check which parts of the code have been exercised by a test,
and may be oriented to code statement coverage, condition coverage, path coverage, etc.
o memory analyzers - such as bounds-checkers and leak detectors.
o load/performance test tools - for testing client/server and web applications under various load
levels.
o web test tools - to check that links are valid, HTML code usage is correct, client-side and
server-side programs work, and that a web site's interactions are secure.
o other tools - for test case management, documentation management, bug reporting, and
configuration management.
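To make the load/performance category above more concrete, here is a rough sketch (not a production tool, and not from the original text) of the basic idea: fire a number of concurrent requests and record response times. The URL and the number of simulated users are hypothetical; dedicated tools such as JMeter or LoadRunner do this at much larger scale with proper reporting.

import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

URL = "https://example.com/"   # hypothetical system under test
CONCURRENT_USERS = 20

def timed_request(_):
    """Issue one request and return (HTTP status, elapsed seconds)."""
    start = time.perf_counter()
    with urlopen(URL, timeout=10) as response:
        response.read()
        status = response.status
    return status, time.perf_counter() - start

with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
    results = list(pool.map(timed_request, range(CONCURRENT_USERS)))

durations = [elapsed for _, elapsed in results]
errors = [status for status, _ in results if status != 200]
print(f"requests: {len(results)}, errors: {len(errors)}")
print(f"average: {sum(durations) / len(durations):.3f}s, worst: {max(durations):.3f}s")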
What is Acceptance Testing?
Testing conducted to enable a user/customer to determine whether to accept a software product.
Normally performed to validate that the software meets a set of agreed acceptance criteria.
What is Baseline?
The point at which some deliverable produced during the software engineering process is put under
formal change control.
What is Bug?
A fault in a program which causes the program to perform in an unintended or unanticipated manner.
What is CMM?
The Capability Maturity Model for Software (CMM or SW-CMM) is a model for judging the maturity of the
software processes of an organization and for identifying the key practices that are required to increase
the maturity of these processes.
What is Coding?
The generation of source code.
What is Debugging?
The process of finding and removing the causes of software failures.
What is Defect?
Nonconformance to requirements or to the functional/program specification.
What is Emulator?
A device, computer program, or system that accepts the same inputs and produces the same outputs as
a given system.
What is Endurance Testing?
Checks for memory leaks or other problems that may occur with prolonged execution.
What is Inspection?
A group review quality improvement process for written material. It consists of two aspects: product
improvement (of the document itself) and process improvement (of both document production and
inspection).
What is Metric?
A standard of measurement. Software metrics are the statistics describing the structure or content of a
program. A metric should be a real objective measurement of something such as number of bugs per
lines of code.
What is Testability?
The degree to which a system or component facilitates the establishment of test criteria and the
performance of tests to determine whether those criteria have been met.
What is Testing?
The process of exercising software to verify that it satisfies specified requirements and to detect
errors.
The process of analyzing a software item to detect the differences between existing and required
conditions (that is, bugs), and to evaluate the features of the software item (Ref. IEEE Std 829).
The process of operating a system or component under specified conditions, observing or recording the
results, and making an evaluation of some aspect of the system or component.
What is Test Automation?
It is the same as Automated Testing.
How do you go about going into a new organization? How do you assimilate?
Define the following and explain their usefulness: Change Management, Configuration Management,
Version Control, and Defect Tracking.
What is Validation?
The process of evaluating software at the end of the software development process to ensure
compliance with software requirements. The techniques for validation are testing, inspection and
reviewing.
What is Verification?
The process of determining whether or not the products of a given phase of the software development
cycle meet the implementation steps and can be traced to the incoming objectives established during
the previous phase. The techniques for verification are testing, inspection and reviewing.
What is Walkthrough?
A review of requirements, designs or code characterized by the author of the material under review
guiding the progression of the review.
What if the software is so buggy it can't really be tested at all?
The best bet in this situation is for the testers to go through the process of reporting whatever bugs
or blocking-type problems initially show up, with the focus being on critical bugs. Since this type of
problem can severely affect schedules, and indicates deeper problems in the software development
process (such as insufficient unit testing or insufficient integration testing, poor design, improper
build or release procedures, etc.), managers should be notified and provided with some
documentation as evidence of the problem.
What if the project isn't big enough to justify extensive testing?
Consider the impact of project errors, not the size of the project. However, if extensive testing is
still not justified, risk analysis is again needed, and the same considerations as described previously
apply. The tester might then do ad hoc testing, or write up a limited test plan based on the risk
analysis.
functional requirement: A requirement that specifies a function that a component or system must
perform.
functional testing: Testing based on an analysis of the specification of the functionality of a component
or system.
functionality: The capability of the software product to provide functions which meet stated and implied
needs when the software is used under specified conditions.
functionality testing: The process of testing to determine the functionality of a software product.