Manual Testing Interview Questions for Freshers
(b) Software complexity - the complexity of current software applications can be difficult
to comprehend for anyone without experience in modern-day software development.
Windows-type interfaces, client-server and distributed applications, data
communications, enormous relational databases, and sheer size of applications have all
contributed to the exponential growth in software/system complexity. And the use of
object-oriented techniques can complicate instead of simplify a project unless it is well-
engineered.
(c) Programming errors - programmers, like anyone else, can make mistakes.
(e) Time pressures - scheduling of software projects is difficult at best, often requiring a
lot of guesswork. When deadlines loom and the crunch comes, mistakes will be made.
(f) Poorly documented code - it's tough to maintain and modify code that is badly written
or poorly documented; the result is bugs. In many organizations management provides no
incentive for programmers to document their code or write clear, understandable,
maintainable code. In fact, it's usually the opposite: they get points mostly for quickly
turning out code, and there's job security if nobody else can understand it ("if it was hard to
write, it should be hard to read").
Verification typically involves reviews and meetings to evaluate documents, plans, code,
requirements, and specifications. This can be done with checklists, issues lists,
walkthroughs, and inspection meetings. Validation typically involves actual testing and
takes place after verifications are completed. The term IV & V refers to Independent
Verification and Validation.
5. What is a walkthrough?
A walkthrough is an informal meeting for evaluation or informational purposes, in which
the author of a document or piece of code leads participants through it. Little or no
preparation is usually required.
6. What is an inspection?
An inspection is more formalized than a walkthrough, typically with 3-8 people including
a moderator, reader, and a recorder to take notes. The subject of the inspection is
typically a document such as a requirements spec or a test plan, and the purpose is to find
problems and see what's missing, not to fix anything. Attendees should prepare for this
type of meeting by reading through the document; most problems will be found during this
preparation. The result of the inspection meeting should be a written report. Thorough
preparation for inspections is difficult, painstaking work, but it is one of the most cost-
effective methods of ensuring quality. Employees who are most skilled at inspections are
like the eldest brother in the parable in "Why is it often hard for management to get
serious about quality assurance?". Their skill may have low visibility, but they are
extremely valuable to any software development organization, since bug prevention is far
more cost-effective than bug detection.
a). Black box testing - not based on any knowledge of internal design or code. Tests are
based on requirements and functionality.
b). White box testing - based on knowledge of the internal logic of an applications code.
Tests are based on coverage of code statements, branches, paths, conditions.
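As a sketch of the white-box idea, the tests below are chosen by reading the code so that
both branches of the if/else execute. The function and values are illustrative, not from
any real project.

```python
def apply_discount(price, is_member):
    """Members get 10% off; everyone else pays full price."""
    if is_member:
        return round(price * 0.9, 2)
    else:
        return price

# One test per branch gives 100% branch coverage of this small function.
assert apply_discount(100.0, True) == 90.0    # exercises the 'if' branch
assert apply_discount(100.0, False) == 100.0  # exercises the 'else' branch
```

A black-box tester would derive the same checks from a requirements statement
("members receive a 10% discount") without ever seeing the if/else.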
c). Unit testing - the most micro scale of testing; to test particular functions or code
modules. Typically done by the programmer and not by testers, as it requires detailed
knowledge of the internal program design and code. Not always easily done unless the
application has a well-designed architecture with tight code; may require developing test
driver modules or test harnesses.
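A minimal unit-test sketch using Python's standard `unittest` module, including a tiny
test driver of the kind mentioned above. The function under test is hypothetical.

```python
import unittest

# Hypothetical module-under-test: a single small function a programmer
# might unit-test in isolation.
def parse_version(text):
    """Parse 'major.minor' into a (major, minor) tuple of ints."""
    major, minor = text.split(".")
    return int(major), int(minor)

class ParseVersionTest(unittest.TestCase):
    def test_basic(self):
        self.assertEqual(parse_version("2.7"), (2, 7))

    def test_rejects_garbage(self):
        # "not-a-version" contains no ".", so the unpack raises ValueError
        with self.assertRaises(ValueError):
            parse_version("not-a-version")

# A tiny test driver: load the test case and run it directly.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(ParseVersionTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

In a real project the function and its tests would live in separate modules, and the
driver would usually be a test runner such as `unittest` discovery or `pytest`.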
g). System testing - black-box type testing that is based on overall requirements
specifications; covers all combined parts of a system.
h). End-to-end testing - similar to system testing; the macro end of the test scale; involves
testing of a complete application environment in a situation that mimics real-world use,
such as interacting with a database, using network communications, or interacting with
other hardware, applications, or systems if appropriate.
i). Sanity testing or smoke testing - typically an initial testing effort to determine if a new
software version is performing well enough to accept it for a major testing effort. For
example, if the new software is crashing systems every 5 minutes, bogging down systems
to a crawl, or corrupting databases, the software may not be in a sane enough condition to
warrant further testing in its current state.
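The gating logic of a smoke test can be sketched as a handful of quick checks run before
committing to a full test pass. The check names and their stand-in bodies here are
illustrative; in practice each would launch the application, hit a database, and so on.

```python
def build_starts():
    return True   # stand-in for "application launches without crashing"

def database_reachable():
    return True   # stand-in for "can open a connection and run a trivial query"

SMOKE_CHECKS = [build_starts, database_reachable]

def smoke_test():
    """Return the names of failed checks; non-empty means reject the build."""
    return [check.__name__ for check in SMOKE_CHECKS if not check()]

print("failed smoke checks:", smoke_test())
```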
j). Regression testing - re-testing after fixes or modifications of the software or its
environment. It can be difficult to determine how much re-testing is needed, especially
near the end of the development cycle. Automated testing tools can be especially useful
for this type of testing.
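One common automation pattern for regression testing is a table of recorded
input/expected-output pairs replayed after every change; any mismatch flags a
regression. The function and data below are illustrative.

```python
def slugify(title):
    """Function under regression test: turn a title into a URL slug."""
    return title.strip().lower().replace(" ", "-")

# Recorded cases from earlier, known-good versions of the software.
REGRESSION_CASES = [
    ("Hello World", "hello-world"),
    ("  Padded  ", "padded"),
    ("already-slugged", "already-slugged"),
]

def run_regression():
    """Replay every recorded case; return (input, expected, actual) mismatches."""
    return [(inp, exp, slugify(inp))
            for inp, exp in REGRESSION_CASES
            if slugify(inp) != exp]

print("regressions:", run_regression())  # an empty list means no regressions
```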
k). Acceptance testing - final testing based on specifications of the end-user or customer,
or based on use by end-users/customers over some limited period of time.
l). Load testing - testing an application under heavy loads, such as testing of a web site
under a range of loads to determine at what point the system's response time degrades or
fails. Stress testing - term often used interchangeably with load and performance testing.
Also used to describe such tests as system functional testing while under unusually heavy
loads, heavy repetition of certain actions or inputs, input of large numerical values, large
complex queries to a database system, etc.
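A toy load-test sketch: fire a batch of concurrent calls at a function and record each
response time, then report the worst case. `handle_request` stands in for a real service
endpoint; all names and numbers are illustrative.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(n):
    time.sleep(0.01)  # simulate a fixed per-request service time
    return n * 2

def load_test(workers, requests):
    """Run `requests` calls across `workers` threads; return the worst latency."""
    timings = []
    def timed_call(i):
        start = time.perf_counter()
        handle_request(i)
        timings.append(time.perf_counter() - start)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(timed_call, range(requests)))
    return max(timings)

print("worst response time: %.3fs" % load_test(workers=5, requests=20))
```

A real load test would sweep `workers` upward and plot latency against load to find the
degradation point; dedicated tools (JMeter, Locust, etc.) are normally used for this.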
m). Performance testing - term often used interchangeably with stress and load testing.
Ideally performance testing (and any other type of testing) is defined in requirements
documentation or QA or Test Plans.
n). Usability testing - testing for user-friendliness. Clearly this is subjective, and will
depend on the targeted end-user or customer. User interviews, surveys, video recording
of user sessions, and other techniques can be used. Programmers and testers are usually
not appropriate as usability testers.
p). Security testing - testing how well the system protects against unauthorized internal or
external access, willful damage, etc; may require sophisticated testing techniques.
r). Exploratory testing - often taken to mean a creative, informal software test that is not
based on formal test plans or test cases; testers may be learning the software as they test
it.
s). Ad-hoc testing - similar to exploratory testing, but often taken to mean that the testers
have significant understanding of the software before testing it.
x). Mutation testing - a method for determining if a set of test data or test cases is useful,
by deliberately introducing various code changes (bugs) and retesting with the original
test data/cases to determine if the bugs are detected. Proper implementation requires large
computational resources.
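A hand-rolled mutation-testing sketch: run the same test cases against the original
function and a deliberately broken "mutant"; a useful test suite kills the mutant, i.e. at
least one case fails against it. The function, mutation, and cases are illustrative.

```python
def is_even(n):
    return n % 2 == 0

def is_even_mutant(n):
    return n % 2 != 0   # deliberate mutation: == flipped to !=

TEST_CASES = [(4, True), (7, False), (0, True)]

def survives(candidate):
    """True if candidate passes every test case, i.e. the mutant survived."""
    return all(candidate(n) == expected for n, expected in TEST_CASES)

assert survives(is_even)             # the original passes the suite
assert not survives(is_even_mutant)  # the suite detects (kills) the mutant
```

Real mutation-testing tools generate thousands of such mutants automatically and rerun
the full suite against each one, which is where the large computational cost comes from.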
a). Poor requirements - if requirements are unclear, incomplete, too general, or not
testable, there will be problems.
b). Unrealistic schedule - if too much work is crammed in too little time, problems are
inevitable.
c). Inadequate testing - no one will know whether or not the program is any good until the
customer complains or systems crash.
Quality software is reasonably bug-free, delivered on time and within budget, meets
requirements and/or expectations, and is maintainable.
Good code is code that works, is bug-free, and is readable and maintainable. Some
organizations have coding standards that all developers are supposed to adhere to, but
everyone has different ideas about what's best, or what is too many or too few rules. There
are also various theories and metrics, such as McCabe complexity metrics. It should be
kept in mind that excessive use of standards and rules can stifle productivity and
creativity. Peer reviews, buddy checks, code analysis tools, etc. can be used to check for
problems and enforce standards.
The life cycle begins when an application is first conceived and ends when it is no longer
in use. It includes aspects such as initial concept, requirements analysis, functional
design, internal design, documentation planning, test planning, coding, document
preparation, integration, testing, maintenance, updates, retesting, phase-out, and other
aspects.
A good test engineer has a "test to break" attitude, an ability to take the point of view of the
customer, a strong desire for quality, and an attention to detail. Tact and diplomacy are
useful in maintaining a cooperative relationship with developers, as is an ability to
communicate with both technical and non-technical people.
This is a hard decision. Most modern applications are very complex and run in
interdependent environments, so complete testing can never be done. However, there are
some common factors that indicate when to stop testing: deadlines are reached; test cases
are completed with a certain percentage passed; the test budget is used up; coverage of
functionality and requirements reaches a specified point; the bug rate falls below a
specified level; milestone testing ends; and so on.
Testing work is unlimited, especially in large applications. "Enough" testing means the
application matches the product requirements and specifications well, including
functionality, usability, stability, performance, and so on.
First, automated testing is fast and efficient: it frees testers from complicated, repetitive
daily tests, saving effort and time. Second, automated testing is accurate: it will not make
the mistakes a human tester makes when tired after long testing sessions.
First, since no automated test tool can replace human intelligence, manual testing is
needed to cover what automated testing cannot. Second, before a stable version is
available, manual testing is more effective than automated testing, because automated
runs may not complete due to system instability (crashes, for example).
The selective retesting of a software system that has been modified, to ensure that any
bugs have been fixed, that no other previously working functions have failed as a result
of the repairs, and that newly added features have not created problems with previous
versions of the software. Regression testing is also referred to as verification testing.
Severity: the impact of the bug on the application. Severity levels are: Low, Medium,
High, Very High, and Urgent. Severity is set by the tester and cannot be changed.
Priority: how important it is to fix the bug. Priority levels are set by the team lead or test
manager and can be changed as required.
1. Must fix as soon as possible. Bug is blocking further progress in this area.
2. Should fix soon, before product release.
3. Fix if time permits; somewhat trivial. May be postponed.
A typical "V" model shows development phases on the left-hand side and the
corresponding testing phases on the right-hand side.