Manual Software Testing Interview Questions With Answers
A software tester should have certain qualities that are imperative: the person should be observant, creative, innovative, speculative, and patient. It is important to note that when you opt for manual testing, it is an accepted fact that the job is going to be tedious and laborious. Whether you are a fresher or experienced, there are certain questions to which you should know the answers.
7) What are the checklists which a software tester should follow?
Refer to a standard checklist for software testers to find the answer to this question.
Integration testing is one of the software testing types, where tests are conducted to exercise the interfaces between components, the interactions of the different parts of the system with the operating system, file system, and hardware, and the interfaces between different software systems. It may be carried out by the integrator of the system, but should ideally be carried out by a specific integration tester or a test team.
33) What is the difference between volume testing and load testing?
Volume testing checks whether the system can cope with large amounts of data, for example a large number of fields in a particular record or numerous records in a file. Load testing, on the other hand, measures the behavior of a component or system under increased load. The increase in load can be in terms of the number of parallel users and/or parallel transactions. This helps determine the amount of load that can be handled by the component or the software system.
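As a rough illustration of the load-testing idea, here is a minimal Python sketch (process_request is a hypothetical stand-in for the operation under test) that steps up the number of parallel users and reports throughput at each level:

    import time
    import threading

    def process_request():
        # Hypothetical stand-in for the component or transaction under test.
        time.sleep(0.01)

    def run_load(parallel_users, requests_per_user):
        # Each thread simulates one user issuing requests in sequence.
        def user():
            for _ in range(requests_per_user):
                process_request()
        threads = [threading.Thread(target=user) for _ in range(parallel_users)]
        start = time.time()
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        elapsed = time.time() - start
        total = parallel_users * requests_per_user
        print(f"{parallel_users} users: {total} requests in {elapsed:.2f}s")

    # Step up the load to find the level the component can handle.
    for users in (1, 10, 50):
        run_load(users, 20)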
The results of the tests have to be recorded. The test cases that pass are marked accordingly. If a test case fails, a defect has to be raised, and when the defect is fixed the failed test case has to be executed again.
44) What is Agile Testing?
Testing practice for projects using agile methodologies, treating development as the customer of testing and emphasizing a test-first design paradigm. See also Test Driven Development.
45) What is Application Binary Interface (ABI)?
A specification defining requirements for portability of applications in binary forms
across different system platforms and environments.
46) What is Application Programming Interface (API)?
A formalized set of software calls and routines that can be referenced by an
application program in order to access supporting system or network services.
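For instance, Python's standard library wraps the operating system's name-resolution service in such a call; the application references the API rather than the underlying system internals (a minimal sketch):

    import socket

    # gethostbyname() is an API routine into the system's name-resolution service.
    address = socket.gethostbyname("localhost")
    print(address)  # typically 127.0.0.1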
47) What is Automated Software Quality (ASQ)?
The use of software tools, such as automated testing tools, to improve software
quality.
48) What is Automated Testing?
Testing employing software tools which execute tests without manual intervention.
Can be applied in GUI, performance, API, etc. testing. The use of software to
control the execution of tests, the comparison of actual outcomes to predicted
outcomes, the setting up of test preconditions, and other test control and test
reporting functions.
49) What is Backus-Naur Form?
A metalanguage used to formally describe the syntax of a language.
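For example, a small BNF grammar for arithmetic expressions over single digits might look like this (an illustrative fragment, not tied to any particular language):

    <expr>  ::= <term> | <expr> "+" <term>
    <term>  ::= <digit> | <term> <digit>
    <digit> ::= "0" | "1" | "2" | "3" | "4" | "5" | "6" | "7" | "8" | "9"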
50) What is Basic Block?
A sequence of one or more consecutive, executable statements containing no
branches.
51) What is Basis Path Testing?
A white box test case design technique that uses the algorithmic flow of the
program to design tests.
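As a sketch, consider this small Python function: it has two decisions, so its cyclomatic complexity is 3 and a basis set needs three tests, one per independent path (the function and values are hypothetical):

    def classify(x):
        if x < 0:          # decision 1
            return "negative"
        if x == 0:         # decision 2
            return "zero"
        return "positive"

    # One test per basis path:
    assert classify(-1) == "negative"  # takes the first branch
    assert classify(0) == "zero"       # takes the second branch
    assert classify(5) == "positive"   # takes neither branch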
102. What is Test Driven Development (TDD)?
Testing methodology associated with agile programming, in which every chunk of code is covered by unit tests that must all pass all the time, in an effort to eliminate unit-level and regression bugs during development. Practitioners of TDD write a lot of tests, i.e. roughly as many lines of test code as production code.
103. What is Test Driver?
A program or test tool used to execute tests. Also known as a Test Harness.
104. What is Test Environment?
The hardware and software environment in which tests will be run, and any other software with which the software under test interacts, including stubs and test drivers.
105. What is Test First Design?
Test-first design is one of the mandatory practices of Extreme Programming (XP). It requires that programmers do not write any production code until they have first written a unit test.
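A minimal sketch of the practice in Python's unittest (the add function is a hypothetical example): the unit test is written first and fails until the production code below it is written:

    import unittest

    class TestAdd(unittest.TestCase):
        # Written first: this test fails until add() exists and is correct.
        def test_adds_two_numbers(self):
            self.assertEqual(add(2, 3), 5)

    # Only then is the production code written, to make the test pass.
    def add(a, b):
        return a + b

    if __name__ == "__main__":
        unittest.main()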
106. What is Test Harness?
A program or test tool used to execute tests. Also known as a Test Driver.
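At its simplest, a harness is a loop that executes each test, catches failures, and reports a summary; here is a minimal Python sketch (the two sample tests are hypothetical):

    def test_addition():
        assert 1 + 1 == 2

    def test_subtraction():
        assert 5 - 3 == 2

    def run_harness(tests):
        # Execute every test, recording and reporting pass/fail results.
        passed = 0
        for test in tests:
            try:
                test()
                passed += 1
                print(f"PASS {test.__name__}")
            except AssertionError:
                print(f"FAIL {test.__name__}")
        print(f"{passed}/{len(tests)} tests passed")

    run_harness([test_addition, test_subtraction])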
107. What is Test Plan?
A document describing the scope, approach, resources, and schedule of intended
testing activities. It identifies test items, the features to be tested, the testing
tasks, who will do each task, and any risks requiring contingency planning.
108. What is Test Procedure?
A document providing detailed instructions for the execution of one or more test
cases.
109. What is Test Script?
Commonly used to refer to the instructions for a particular test that will be carried
out by an automated test tool.
110. What is Test Specification?
A document specifying the test approach for a software feature or combination of features, and the inputs, predicted results, and execution conditions for the associated tests.
111. What is Test Suite?
A collection of tests used to validate the behavior of a product. The scope of a test suite varies from organization to organization; there may be several test suites for a particular product, for example. In most cases, however, a test suite is a high-level concept, grouping together hundreds or thousands of tests related by what they are intended to test.
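In Python's unittest module, for example, a suite is an explicit grouping of related tests that can be run as a unit (the login tests here are illustrative):

    import unittest

    class LoginTests(unittest.TestCase):
        def test_valid_password_accepted(self):
            self.assertTrue(len("s3cret") >= 6)

        def test_short_password_rejected(self):
            self.assertFalse(len("abc") >= 6)

    def login_suite():
        # Group related tests into one suite, runnable as a unit.
        suite = unittest.TestSuite()
        suite.addTest(LoginTests("test_valid_password_accepted"))
        suite.addTest(LoginTests("test_short_password_rejected"))
        return suite

    if __name__ == "__main__":
        unittest.TextTestRunner().run(login_suite())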
112. What are Test Tools?
Computer programs used in the testing of a system, a component of the system, or its documentation.
130. What is 'good code'?
'Good code' is code that works, is bug free, and is readable and maintainable. Some organizations have coding 'standards' that all developers are supposed to adhere to, but everyone has different ideas about what's best, or what is too many or too few rules. There are also various theories and metrics, such as McCabe Complexity metrics. It should be kept in mind that excessive use of standards and rules can stifle productivity and creativity. 'Peer reviews', 'buddy checks', code analysis tools, etc. can be used to check for problems and enforce standards. For C and C++ coding, here are some typical ideas to consider in setting rules/standards; these may or may not apply to a particular situation:
- minimize or eliminate use of global variables.
- use descriptive function and method names - use both upper and lower case, avoid abbreviations, use as many characters as necessary to be adequately descriptive (use of more than 20 characters is not out of line); be consistent in naming conventions.
- use descriptive variable names - use both upper and lower case, avoid abbreviations, use as many characters as necessary to be adequately descriptive (use of more than 20 characters is not out of line); be consistent in naming conventions.
- function and method sizes should be minimized; less than 100 lines of code is good, less than 50 lines is preferable.
- function descriptions should be clearly spelled out in comments preceding a function's code.
- organize code for readability.
- use whitespace generously - vertically and horizontally.
- each line of code should contain 70 characters max.
- one code statement per line.
- coding style should be consistent throughout a program (e.g., use of brackets, indentations, naming conventions, etc.).
- in adding comments, err on the side of too many rather than too few comments; a common rule of thumb is that there should be at least as many lines of comments (including header blocks) as lines of code.
- no matter how small, an application should include documentation of the overall program function and flow (even a few paragraphs is better than nothing); or, if possible, a separate flow chart and detailed program documentation.
- make extensive use of error handling procedures and status and error logging.
- for C++, to minimize complexity and increase maintainability, avoid too many levels of inheritance in class hierarchies (relative to the size and complexity of the application). Minimize use of multiple inheritance, and minimize use of operator overloading (note that the Java programming language eliminates multiple inheritance and operator overloading).
- for C++, keep class methods small; less than 50 lines of code per method is preferable.
- for C++, make liberal use of exception handlers.
131. What is 'good design'?
'Design' could refer to many things, but often refers to 'functional design' or 'internal design'. Good internal design is indicated by software code whose overall structure is clear, understandable, easily modifiable, and maintainable; is robust with sufficient error-handling and status-logging capability; and works correctly when implemented. Good functional design is indicated by an application whose functionality can be traced back to customer and end-user requirements. For programs that have a user interface, it's often a good idea to assume that the end user will have little computer knowledge and may not read a user manual or even the on-line help. Some common rules of thumb include:
- the program should act in a way that least surprises the user.
- it should always be evident to the user what can be done next and how to exit.
- the program shouldn't let the users do something stupid without warning them.
142. What can be done if requirements are changing continuously?
This is a common problem and a major headache. Some approaches:
- It's helpful if the application's initial design allows for some adaptability, so that later changes do not require redoing the application from scratch.
- If the code is well-commented and well-documented, this makes changes easier for the developers.
- Use rapid prototyping whenever possible to help customers feel sure of their requirements and minimize changes.
- The project's initial schedule should allow for some extra time commensurate with the possibility of changes.
- Try to move new requirements to a 'Phase 2' version of an application, while using the original requirements for the 'Phase 1' version.
- Negotiate to allow only easily-implemented new requirements into the project, while moving more difficult new requirements into future versions of the application.
- Be sure that customers and management understand the scheduling impacts, inherent risks, and costs of significant requirements changes. Then let management or the customers (not the developers or testers) decide if the changes are warranted - after all, that's their job.
- Balance the effort put into setting up automated testing with the expected effort required to redo the tests to deal with changes.
- Try to design some flexibility into automated test scripts.
- Focus initial automated testing on application aspects that are most likely to remain unchanged.
- Devote appropriate effort to risk analysis of changes to minimize regression-testing needs.
- Design some flexibility into test cases (this is not easily done; the best bet might be to minimize the detail in the test cases, or set up only higher-level generic-type test plans).
- Focus less on detailed test plans and test cases and more on ad hoc testing (with an understanding of the added risk that this entails).
143. What if the project isn't big enough to justify extensive testing?
Consider the impact of project errors, not the size of the project. However, if
extensive testing is still not justified, risk analysis is again needed and the same
considerations as described previously in 'What if there isn't enough time for
thorough testing?' apply. The tester might then do ad hoc testing, or write up a
limited test plan based on the risk analysis.
144. What if the application has functionality that wasn't in the
requirements?
It may take serious effort to determine if an application has significant unexpected
or hidden functionality, and it would indicate deeper problems in the software
development process. If the functionality isn't necessary to the purpose of the
application, it should be removed, as it may have unknown impacts or
dependencies that were not taken into account by the designer or the customer. If
not removed, design information will be needed to determine added testing needs
or regression testing needs. Management should be made aware of any significant
added risks as a result of the unexpected functionality. If the functionality only affects areas such as minor improvements in the user interface, for example, it may not be a significant risk.
145. How can Software QA processes be implemented without stifling
productivity?
By implementing QA processes slowly over time, using consensus to reach
agreement on processes, and adjusting and experimenting as an organization
grows and matures, productivity will be improved instead of stifled. Problem
prevention will lessen the need for problem detection, panics and burn-out will
decrease, and there will be improved focus and less wasted effort. At the same
time, attempts should be made to keep processes simple and efficient, minimize
paperwork, promote computer-based processes and automated tracking and
reporting, minimize time required in meetings, and promote training as part of the
QA process. However, no one - especially talented technical types - likes rules or
bureaucracy, and in the short run things may slow down a bit. A typical scenario
would be that more days of planning and development will be needed, but less
time will be required for late-night bug-fixing and calming of irate customers.
146. What if an organization is growing so fast that fixed QA processes are impossible?
This is a common problem in the software industry, especially in new technology areas. There is no easy solution in this situation, other than:
- Hire good people.
- Management should 'ruthlessly prioritize' quality issues and maintain focus on the customer.
- Everyone in the organization should be clear on what 'quality' means to the customer.
147. How does a client/server environment affect testing?
Client/server applications can be quite complex due to the multiple dependencies
among clients, data communications, hardware, and servers. Thus testing
requirements can be extensive. When time is limited (as it usually is) the focus
should be on integration and system testing. Additionally, load/stress/performance
testing may be useful in determining client/server application limitations and
capabilities. There are commercial tools to assist with such testing.
148. How can World Wide Web sites be tested?
Web sites are essentially client/server applications - with web servers and 'browser' clients. Consideration should be given to the interactions between HTML pages, TCP/IP communications, Internet connections, firewalls, applications that run in web pages (such as applets, JavaScript, plug-in applications), and applications that run on the server side (such as CGI scripts, database interfaces, logging applications, dynamic page generators, ASP, etc.). Additionally, there are a wide variety of servers and browsers, various versions of each, small but sometimes significant differences between them, variations in connection speeds, rapidly changing technologies, and multiple standards and protocols. The end result is that testing for web sites can become a major ongoing effort. Other considerations might include:
- What are the expected loads on the server (e.g., number of hits per unit time), and what kind of performance is required under such loads (such as web server response time and database query response times)? What kinds of tools will be needed for performance testing (such as web load testing tools, other tools already in-house that can be adapted, web robot downloading tools, etc.)?
- Who is the target audience? What kind of browsers will they be using? What kind of connection speeds will they be using? Are they intra-organization (thus with likely high connection speeds and similar browsers) or Internet-wide (thus with a wide variety of connection speeds and browser types)?
- What kind of performance is expected on the client side (e.g., how fast should pages appear, how fast should animations, applets, etc. load and run)?
- Will down time for server and content maintenance/upgrades be allowed? How much?
- What kinds of security (firewalls, encryption, passwords, etc.) will be required, and what is it expected to do? How can it be tested?
- How reliable are the site's Internet connections required to be? And how does that affect backup system or redundant connection requirements and testing?
- What processes will be required to manage updates to the web site's content, and what are the requirements for maintaining, tracking, and controlling page content, graphics, links, etc.?
- Which HTML specification will be adhered to? How strictly? What variations will be allowed for targeted browsers?
- Will there be any standards or requirements for page appearance and/or graphics throughout a site or parts of a site?
- How will internal and external links be validated and updated? How often? (A small sketch of automated link checking follows this answer.)
- Can testing be done on the production system, or will a separate test system be required? How are browser caching, variations in browser option settings, dial-up connection variabilities, and real-world internet 'traffic congestion' problems to be accounted for in testing?
- How extensive or customized are the server logging and reporting requirements? Are they considered an integral part of the system, and do they require testing?
- How are CGI programs, applets, JavaScript, ActiveX components, etc. to be maintained, tracked, controlled, and tested?
Some usability guidelines also apply:
- Pages should be 3-5 screens max unless content is tightly focused on a single topic. If larger, provide internal links within the page.
- The page layouts and design elements should be consistent throughout a site, so that it's clear to the user that they're still within a site.
- Pages should be as browser-independent as possible, or pages should be provided or generated based on the browser type.
- All pages should have links external to the page; there should be no dead-end pages.
- The page owner, revision date, and a link to a contact person or organization should be included on each page.
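As one small illustration, link validation can be partly automated. Here is a minimal Python sketch (error handling simplified; the site URL is a hypothetical placeholder) that fetches a page and reports the status of every link on it:

    from html.parser import HTMLParser
    from urllib.error import URLError
    from urllib.parse import urljoin
    from urllib.request import urlopen

    class LinkCollector(HTMLParser):
        # Collect the href attribute of every anchor tag on the page.
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

    def check_links(page_url):
        parser = LinkCollector()
        parser.feed(urlopen(page_url).read().decode("utf-8", "replace"))
        for link in parser.links:
            full = urljoin(page_url, link)
            try:
                print(urlopen(full).status, full)
            except URLError as err:
                print("DEAD", full, f"({err})")

    check_links("http://example.com/")  # hypothetical site under test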
149. How is testing affected by object-oriented designs?
Well-engineered object-oriented design can make it easier to trace from code to internal design to functional design to requirements. While there will be little effect on black box testing (where an understanding of the internal design of the application is unnecessary), white-box testing can be oriented to the application's objects. If the application was well designed, this can simplify test design.
150. What is Extreme Programming and what's it got to do with testing?
Extreme Programming (XP) is a software development approach for small teams
on risk-prone projects with unstable requirements. It was created by Kent Beck
who described the approach in his book 'Extreme Programming Explained'. Testing
('extreme testing') is a core aspect of Extreme Programming. Programmers are
expected to write unit and functional test code first - before the application is
developed. Test code is under source control along with the rest of the code.
Customers are expected to be an integral part of the project team and to help
develop scenarios for acceptance/black box testing. Acceptance tests are
preferably automated, and are modified and rerun for each of the frequent
development iterations. QA and test personnel are also required to be an integral
part of the project team. Detailed requirements documentation is not used, and
frequent re-scheduling, re-estimating, and re-prioritizing is expected.
Recording of user sessions and other techniques can be used. Programmers and testers are usually not appropriate as usability testers.
- install/uninstall testing - testing of full, partial, or upgrade install/uninstall processes.
- recovery testing - testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.
- failover testing - typically used interchangeably with 'recovery testing'.
- security testing - testing how well the system protects against unauthorized internal or external access, willful damage, etc.; may require sophisticated testing techniques.
- compatibility testing - testing how well software performs in a particular hardware/software/operating system/network/etc. environment.
- exploratory testing - often taken to mean a creative, informal software test that is not based on formal test plans or test cases; testers may be learning the software as they test it.
- ad-hoc testing - similar to exploratory testing, but often taken to mean that the testers have significant understanding of the software before testing it.
- context-driven testing - testing driven by an understanding of the environment, culture, and intended use of software. For example, the testing approach for life-critical medical equipment software would be completely different than that for a low-cost computer game.
- user acceptance testing - determining if software is satisfactory to an end-user or customer.
- comparison testing - comparing software weaknesses and strengths to competing products.
- alpha testing - testing of an application when development is nearing completion; minor design changes may still be made as a result of such testing. Typically done by end-users or others, not by programmers or testers.
- beta testing - testing when development and testing are essentially completed and final bugs and problems need to be found before final release. Typically done by end-users or others, not by programmers or testers.
- mutation testing - a method for determining if a set of test data or test cases is useful, by deliberately introducing various code changes ('bugs') and retesting with the original test data/cases to determine if the 'bugs' are detected. Proper implementation requires large computational resources. (See the sketch below.)
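A hand-rolled Python sketch of the idea (function, mutant, and test data all hypothetical): introduce a deliberate 'bug' and check whether the existing test data catches it.

    def add(a, b):
        return a + b

    def mutant_add(a, b):
        # Deliberately injected 'bug': subtraction instead of addition.
        return a - b

    test_data = [(2, 3, 5), (0, 0, 0), (-1, 1, 0)]

    def detects_mutant(impl):
        # The test set is useful only if some case fails on the mutant.
        return any(impl(a, b) != expected for a, b, expected in test_data)

    assert not detects_mutant(add)       # the original passes every case
    print("mutant detected:", detects_mutant(mutant_add))  # True: data is useful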
154. Why is it often hard for management to get serious about quality assurance?
Solving problems is a high-visibility process; preventing problems is low-visibility. This is illustrated by an old parable: In ancient China there was a family of healers, one of whom was known throughout the land and employed as a physician to a great lord. The physician was asked which of his family was the most skillful healer. He replied, "I tend to the sick and dying with drastic and dramatic treatments, and on occasion someone is cured and my name gets out among the lords." "My elder brother cures sickness when it just begins to take root, and his skills are known among the local peasants and neighbors." "My eldest brother is able to sense the spirit of sickness and eradicate it before it takes form. His name is unknown outside our home."
155. Why does software have bugs?
1. Miscommunication or no communication - as to the specifics of what an application should or shouldn't do (the application's requirements).
2. Software complexity - the complexity of current software applications can be difficult to comprehend for anyone without experience in modern-day software development. Multi-tiered distributed systems, data communications, enormous relational databases, and the sheer size of applications have all contributed to the exponential growth in software complexity.
Bugs can be reported in an Excel sheet by listing attributes such as date, issue brief, issue description, and issue status as columns, and setting filters on the column attributes. But most companies use a SharePoint-style defect management process for reporting bugs. In this process, when the project comes for testing, module-wise details of the project are entered into the defect management system being used. It contains the following fields:
1. Date
2. Issue brief
3. Issue description (used by the developer to reproduce the issue)
4. Issue status (active, resolved, on hold, suspended, and not able to reproduce)
5. Assigned to (names of members allocated to the project)
6. Priority (high, medium, and low)
7. Severity (major, medium, and low)
158. What are the tables in test plans and test cases?
A test plan is a document that contains the scope, approach, test design, and test strategies. It includes the following:
1. Test plan identifier
2. Scope
3. Features to be tested
4. Features not to be tested
5. Test strategy
6. Test approach
7. Test deliverables
8. Responsibilities
9. Staffing and training
10. Risks and contingencies
11. Approval
A test case, by contrast, is a documented set of steps/activities that are carried out or executed on the software in order to confirm its functionality/behavior for a certain set of inputs.
159. What are the table contents in test plans and test cases?
A test plan is a document which is prepared with the details of the testing priorities. A test plan generally includes:
1. Objective of testing
2. Scope of testing
3. Reason for testing
4. Timeframe
5. Environment
6. Entrance and exit criteria
7. Risk factors involved
8. Deliverables
160. What automated testing tools are you familiar with?
WinRunner, LoadRunner, QTP, Silk Performer, TestDirector, Rational Robot, and QARun.
161. How did you use automated testing tools in your job?
1. For regression testing.
2. As a criterion to decide the condition of a particular build.
162. How do you plan test automation?
1. Prepare the automation test plan.
2. Identify the scenario.
3. Record the scenario.
4. Enhance the scripts by inserting checkpoints and conditional loops.
5. Incorporate error handlers.
6. Debug the script.
7. Fix the issues.
8. Rerun the script and report the results.
163. Can test automation improve test effectiveness?
Yes. Automating a test makes the test process:
1. Fast
2. Reliable
3. Repeatable
4. Programmable
5. Reusable
6. Comprehensive

What is data-driven automation?
Testing the functionality with more test cases becomes laborious as the functionality grows. For multiple sets of data (test cases), you can execute the test once and figure out for which data it has failed and for which data the test has passed. This feature is available in WinRunner with the data-driven test, where the data can be taken from an Excel sheet or Notepad.
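The same idea outside WinRunner, as a minimal Python sketch: the test data lives in a file and a single test loop runs once per row (testdata.csv and to_upper are hypothetical):

    import csv

    def to_upper(text):
        # Hypothetical function under test.
        return text.upper()

    # testdata.csv holds one case per row: input,expected (e.g. abc,ABC)
    with open("testdata.csv", newline="") as f:
        for value, expected in csv.reader(f):
            result = to_upper(value)
            status = "PASS" if result == expected else "FAIL"
            print(f"{status}: to_upper({value!r}) -> {result!r}, expected {expected!r}")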
173. What types of scripting techniques for test automation do you know?
There are 5 types of scripting techniques: Linear, Structured, Shared, Data-Driven, and Keyword-Driven.
174. What are principles of good testing scripts for automation?
1. Proper coding standards and guidelines.
2. A standard format for defining functions, exception handlers, etc.
3. Comments for functions.
4. Proper error-handling mechanisms.
5. Appropriate synchronisation techniques (see the sketch after this answer).

What tools are available for support of testing during the software development life cycle?
Testing tools for regression and load/stress testing, such as QTP, LoadRunner, Rational Robot, WinRunner, Silk, TestComplete, and Astra, are available in the market. For defect tracking, Bugzilla and TestRunner are available.
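The synchronisation and error-handling principles translate to any tool. Here is a minimal Python sketch (the polled condition and step are hypothetical): a wait-until helper replaces fixed sleeps, and a central error handler keeps one failing step from aborting the run.

    import time

    def wait_until(condition, timeout=10.0, interval=0.5):
        # Poll until the condition holds or the timeout expires,
        # instead of sleeping for a fixed period.
        deadline = time.time() + timeout
        while time.time() < deadline:
            if condition():
                return True
            time.sleep(interval)
        return False

    def run_step(step, name):
        # Central error handler: log the failure and keep the run going.
        try:
            step()
            print(f"PASS {name}")
        except Exception as err:
            print(f"FAIL {name}: {err}")

    def open_login_page():
        if not wait_until(lambda: True):  # stand-in for 'page is loaded'
            raise AssertionError("page never loaded")

    run_step(open_login_page, "open login page")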
175. Can the activities of test case design be automated?
Test case design is about formulating the steps to be carried out to verify something about the application under test, and this cannot be automated. However, the mechanical parts of the activity, such as putting the test results into an Excel sheet, can be.
176. What are the limitations of automating software testing?
Hard-to-create environments, like out-of-memory conditions, invalid input/reply, and corrupt registry entries, make applications behave poorly, and existing automated tools can't force these conditions; they simply test your application in a normal environment.
177. What skills are needed to be a good test automator?
1. Good programming logic.
2. Analytical skills.
3. A pessimistic (skeptical) nature.
178. How do you find out whether tools work well with your existing system?
1. Discuss with the support officials.
2. Download the trial version of the tool and evaluate it.
3. Get suggestions from people who are working with the tool.
179. Describe some problems that you had with automated testing tools.
1. The inability of WinRunner to identify third-party controls like Infragistics controls.
2. A change in the location of a table object will cause an 'object not found' error.
3. The inability of WinRunner to execute the same script against multiple languages.
180. What are the main attributes of test automation?
Maintainability, Reliability, Flexibility, Efficiency, Portability, Robustness, and
Usability - these are the main attributes in test automation.
181. What testing activities might you want to automate in a project?
Testing tools can be used for:
- Sanity tests (which are repeated on every build).
- Stress/load tests (you simulate a large number of users, which is manually impossible).
- Regression tests (which are done after every code change).
182. How do you find out whether tools work well with your existing system?
To find this out, select the suite of tests which are most important for your application. First run them with the automated tool, then subject the same tests to careful manual testing. If the results coincide, you can say your testing tool is performing well.
183. How will you test the field that generates auto numbers in the AUT when we click the button 'NEW' in the application?
One solution is to create a text file in a certain location, update it with the auto-generated value each time we run the test, and compare the currently generated value with the previous one.
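A minimal sketch of that approach in Python (the state-file path and get_auto_number are hypothetical stand-ins for reading the field from the application):

    import os

    STATE_FILE = "last_auto_number.txt"  # hypothetical location

    def get_auto_number():
        # Stand-in for reading the auto-generated field after clicking 'NEW'.
        return 1002

    current = get_auto_number()
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE) as f:
            previous = int(f.read().strip())
        # A freshly generated number should differ from the stored one.
        assert current != previous, f"auto number did not change from {previous}"
    with open(STATE_FILE, "w") as f:
        f.write(str(current))
    print("auto-number check passed:", current)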
184. How will you evaluate the fields in the application under test using an automation tool?
We can use verification points (in Rational Robot) to validate the fields. For example, using the Object Data and Object Properties verification points, we can validate fields.
185. Can we perform the testing of a single application at the same time using different tools on the same machine?
No. The testing tools would be left in ambiguity about which browser was opened by which tool.
186. What is the difference between web application testing and client/server testing? State the different types of web application testing and client/server testing.
Web applications are essentially client/server applications in which the clients are browsers and the server is a web server (see question 148). Client/server testing focuses on the interactions between dedicated client applications, the network, and the server; web application testing additionally covers browser compatibility, URL and link coverage, and internet-specific concerns such as firewalls, connection speeds, and varied browser versions.
187. What is 'configuration management'?
Configuration management is a process to control and document any changes
made during the life of a project. Revision control, Change Control, and Release
Control are important aspects of Configuration Management.
188. How do you test web applications?
The basic difference in web testing is that here we have to test for URL coverage and link coverage. Using WinRunner we can conduct web testing, but we have to make sure that the WebTest option is selected in the 'Add-in Manager'. Using WinRunner we cannot test XML objects.
189. What problems are encountered while testing application compatibility on different browsers and different operating systems?
Font issues and alignment issues.
190. How does testing proceed when an SRS or any other document is not given?
If an SRS is not available, we can perform exploratory testing. In exploratory testing, the basic module is executed, and depending on its results, the next plan is executed.