Testing Concepts and Terminology
A strategy for software testing must accommodate low level tests that are
necessary to verify that a small source code segment has been correctly
implemented as well as high level tests that validate major system functions
against customer requirements.
Structural tests (also known as white-box tests) find bugs in low-level operations,
such as those that occur at the line-of-code or object level. Structural testing
requires detailed knowledge of the system.
Behavioral tests (also known as black-box tests) are often used to find bugs in
high-level operations, at the levels of features, operational profiles and customer
scenarios.
A testing strategy for a software system should encompass both structural
and behavioral testing.
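To make the distinction concrete, here is a minimal sketch in Python: the `clamp` function and both test styles are hypothetical, but they show how a behavioral test is derived from the specification alone while a structural test is written to exercise every branch of the code.

```python
# Hypothetical function under test: clamps a value to a range.
def clamp(value, low, high):
    if value < low:
        return low
    if value > high:
        return high
    return value

# Behavioral (black-box) test: cases derived from the specification alone.
def test_clamp_black_box():
    assert clamp(5, 0, 10) == 5     # in range: unchanged
    assert clamp(-3, 0, 10) == 0    # below range: clamped up to low
    assert clamp(42, 0, 10) == 10   # above range: clamped down to high

# Structural (white-box) test: cases chosen with knowledge of the code,
# so that each branch (both ifs and the fall-through) executes.
def test_clamp_white_box():
    assert clamp(-1, 0, 10) == 0    # takes the `value < low` branch
    assert clamp(11, 0, 10) == 10   # takes the `value > high` branch
    assert clamp(0, 0, 10) == 0     # boundary: neither branch taken

test_clamp_black_box()
test_clamp_white_box()
```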
divine QA Testing
Unit Testing
Integration Testing
System Testing
Acceptance Testing
Components and classes are packaged and integrated to form the complete software
system. Integration testing addresses the issues associated with the dual problems
of verification and program construction. Black-box test case design techniques
are the most prevalent during integration, although a limited amount of white-box
testing may be used to ensure coverage of major control paths.
During the various phases of testing, different types of tests can be conducted to
examine a specific view or aspect of the system. The following tests can be
conducted during unit, integration, system, and acceptance testing.
Functionality Testing:
During unit testing, each component or object is tested for completeness and
correctness of implementation of a specific feature or function. In integration
testing, we focus on functionality that requires the correct operation of two or
more components or a flow of data between them.
User Interface Testing
The user interface is the first thing a user encounters in a product. While
the product may perform its intended function correctly, if the UI behaves
incorrectly, does not refresh properly, or inadvertently overwrites meaningful
text on the screen, the user can be seriously annoyed.
User interface testing during the unit testing phase often involves only a select
few UI objects or a specific screen or page. During integration testing we need to
focus on navigation/page flow across several screens/pages. In system testing we
need to test the complete navigation required to meet client requirements. In
addition to testing normal navigation with reference to various types of users,
it is also necessary to test for negative behavior.
Although a product may appear to function properly when tested by a single user,
undesirable results may occur when multiple instances of the product run
concurrently; problems may show up in the application or in the operating system.
A product that can have many instances running in parallel at the same time
should undergo these concurrent execution tests:
• Simple usage (Two users)
• Standard usage (many users)
• Boundary situations (maximum number of users plus or minus one) if this
limit exists
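As a sketch of such concurrent execution tests, the following uses Python threads against a hypothetical shared counter standing in for the product; the point is that every "user's" work must be reflected, with no lost updates.

```python
import threading

# Hypothetical product state: a shared counter; the lock prevents lost updates.
class Counter:
    def __init__(self):
        self.value = 0
        self._lock = threading.Lock()

    def increment(self):
        with self._lock:
            self.value += 1

def run_concurrent(num_users, increments_per_user=1000):
    """Run `num_users` simulated users in parallel and check for lost updates."""
    counter = Counter()

    def user():
        for _ in range(increments_per_user):
            counter.increment()

    threads = [threading.Thread(target=user) for _ in range(num_users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # Every increment must be reflected; a lower value indicates a concurrency bug.
    assert counter.value == num_users * increments_per_user
    return counter.value

run_concurrent(2)    # simple usage: two users
run_concurrent(20)   # standard usage: many users
```

A boundary test would set `num_users` to the product's documented maximum, plus and minus one, when such a limit exists.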
Dependency Testing
Dependency testing is recommended to test any API calls made to other products
and ensure that these calls act as promised. Such tests also provide a good
regression mechanism when the product being developed is integrated with a new
version of a product it depends on. We need to test all interactions between
products, including error cases. In general, anywhere data or control is
transferred from one component to another (or to several components), either
immediately or in a delayed fashion, an interface exists that can cause trouble.
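A common way to write such dependency tests is against a mock of the other product's API, so the contract can be checked without the real product installed. The sketch below is illustrative: `fetch_price` and `get_item` are hypothetical names, not a real library.

```python
from unittest import mock

# Hypothetical wrapper around another product's API; `get_item` is the
# promised call and `fetch_price` is our code that depends on it.
def fetch_price(client, item_id):
    resp = client.get_item(item_id)   # the API call to the other product
    if resp is None:
        raise LookupError(f"item {item_id} not found")
    return resp["price"]

# Success case: verify we call the API exactly as documented.
client = mock.Mock()
client.get_item.return_value = {"price": 9.99}
assert fetch_price(client, 42) == 9.99
client.get_item.assert_called_once_with(42)

# Error case: the documented "not found" behavior must be handled too.
client.get_item.return_value = None
try:
    fetch_price(client, 7)
    raise AssertionError("expected LookupError for a missing item")
except LookupError:
    pass
```

The same mocked tests, re-run against a real client for the new version of the dependency, become the regression mechanism described above.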
The product must be compatible (to the published extent) with any previous
release and supported dependencies. Backward compatibility testing ensures that
no problem will arise from a user trying to use the new version of a product along
with an old version and verifies integration between products.
Documentation Testing
The user documentation is part of the product. If a new user is unable to use the
software because of poor documentation, the product as a whole is unusable. All
sections of the user documentation should be tested for accuracy against the
reality of the product. Ideally, user documentation should be held under source
control and made part of regression testing of the product. Verify as correct and
complete all demos, tutorials, and exercises described in the user documentation.
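One way to keep documentation examples under regression testing, assuming the product exposes a Python API, is Python's `doctest` module, which runs the examples embedded in docstrings and compares them against the real behavior. The conversion function below is purely illustrative.

```python
import doctest

def fahrenheit_to_celsius(f):
    """Convert a Fahrenheit temperature to Celsius.

    The examples below could be copied verbatim from the user
    documentation, so running them is a documentation regression test:

    >>> fahrenheit_to_celsius(212)
    100.0
    >>> fahrenheit_to_celsius(32)
    0.0
    """
    return (f - 32) * 5.0 / 9.0

# Verify the documented examples against the real behavior.
results = doctest.testmod()
assert results.failed == 0
```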
Standards Conformance Testing
Conformance to specific sets of industry standards also has to be tested. Tests
need to be planned and conducted to detect bugs in standards conformance that
could seriously affect the product's prospects in the market.
Capacity and Volume Testing
During unit testing, from a structural test perspective, every buffer, storage
resource, processor, bus, and I/O channel in the system has to be tested. In
integration testing, aspects related to the capacity of the network to handle
traffic and its behavior under various volumes are tested. Similarly, the data
storage capability is tested. During system testing, the capacity and volume
limitations are tested from a user's point of view.
Localization
Security Testing
Security testing attempts to verify that protection mechanisms built into a system
will, in fact, protect it from improper penetration. During security testing, the
tester plays the role of an individual who desires to penetrate the system. The
tester might attempt to acquire passwords, access the system through some direct
or indirect means, overwhelm the system so that it is not available to others,
cause system errors, and so on.
Performance Testing
Any product, no matter how well constructed and reliable, is worthless if its
performance does not meet the demands of its users. Testing must be done in a
realistic setting to ensure adequate performance. However, we have to be
cost-effective.
We need to test how the product performs during operations the users normally
execute. If performance goals are set for various operations of the product, it
is necessary to verify that these goals are met. Performance is not only
"how many per second" but also "how long".
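Both measures can be captured in a short harness. The sketch below times a hypothetical `lookup` operation; the throughput and latency goals shown are illustrative placeholders for whatever the product's requirements actually specify.

```python
import time

# Hypothetical operation under test: a keyed lookup.
def lookup(data, key):
    return data.get(key)

data = {i: i * i for i in range(100_000)}

# "How many per second": throughput over a fixed number of calls.
n = 10_000
start = time.perf_counter()
for i in range(n):
    lookup(data, i)
throughput = n / (time.perf_counter() - start)

# "How long": latency of a single call.
start = time.perf_counter()
lookup(data, 1234)
latency = time.perf_counter() - start

# Illustrative goals; real goals come from the product's requirements.
assert throughput > 1_000   # at least 1,000 operations per second
assert latency < 0.1        # a single call completes within 100 ms
```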
Smoke Testing
Smoke testing serves as a pacing mechanism for time-critical projects, allowing
the software team to assess its project on a frequent basis.
The software components produced are integrated into a "build". A series of tests
is designed to expose errors that will keep the build from properly performing its
function. The intent is to uncover "show stopper" errors that have the highest
likelihood of throwing the software project behind schedule. Daily builds are
sometimes made and subjected to smoke testing.
The smoke test should exercise the entire system from end to end. It does not have
to be exhaustive, but it should be capable of exposing major problems. It should
be thorough enough that if the build passes, you can assume the build is stable
enough to be tested more thoroughly.
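A smoke suite can be as simple as a list of end-to-end checks where any failure blocks the build. In the sketch below the three check functions are placeholders for real end-to-end exercises; the login/order/report names are assumptions, not part of any particular product.

```python
# Each check exercises one end-to-end path through the build; the three
# functions here are placeholders for real exercises of the system.
def check_login():
    return True   # e.g., log in and verify the landing page appears

def check_create_order():
    return True   # e.g., create an order and verify it is persisted

def check_report():
    return True   # e.g., run a report over the order just created

SMOKE_CHECKS = [check_login, check_create_order, check_report]

def run_smoke_suite():
    """Return the names of failed checks; an empty list means the build passes."""
    return [check.__name__ for check in SMOKE_CHECKS if not check()]

# A build is stable enough for deeper testing only if nothing failed.
assert run_smoke_suite() == []
```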
Regression Testing
Regression testing is the re-execution of some subset of previously conducted
tests, or all of them, to ensure that changes have not propagated unintended side
effects. It helps to ensure that changes do not introduce unintended behavior or
additional errors. When subsets of tests are selected for regression testing, they
should contain:
• A representative sample of tests that will exercise all software functions
• Additional tests that focus on software functions that are likely to be affected
by the change
• Tests that focus on the software components that have been changed.
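The selection rules above can be sketched as a small routine, assuming each test is mapped to the components it exercises; the test names and coverage table here are invented for illustration.

```python
# Map each test to the components it exercises (invented for illustration).
TEST_COVERAGE = {
    "test_login": {"auth"},
    "test_checkout": {"cart", "payment"},
    "test_search": {"catalog"},
    "test_profile": {"auth", "profile"},
}

# A representative sample that exercises core functions, always included.
REPRESENTATIVE = {"test_login", "test_search"}

def select_regression_tests(changed_components):
    selected = set(REPRESENTATIVE)
    for name, covered in TEST_COVERAGE.items():
        if covered & changed_components:   # test touches a changed component
            selected.add(name)
    return sorted(selected)

# A change to the payment component pulls in test_checkout plus the sample.
print(select_regression_tests({"payment"}))
# → ['test_checkout', 'test_login', 'test_search']
```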
Recovery Testing
A software system must recover from faults and resume processing within a
prescribed time. In some cases, a system must be fault tolerant; that is,
processing faults must not cause overall system function to cease. In other
cases, a system failure must be corrected within a specified period of time.
Recovery testing forces the software to fail in a variety of ways and verifies
that recovery is properly performed. If recovery is automatic, then
re-initialization, checkpointing mechanisms, data recovery, and restart are
evaluated for correctness. If recovery requires human intervention, the
mean-time-to-repair (MTTR) is evaluated to determine whether it is within
acceptable limits.
Quality risks in this area include unacceptable failure rates, unacceptable recovery
times and the inability of the system to function under legitimate conditions
without failure.
Alpha and Beta Testing
A customer conducts the alpha test at the developer's site. The software is used
in a natural setting, with the developer recording the errors and usage problems.
Alpha tests are conducted in a controlled environment.
The beta test is conducted at one or more customer sites by the end users of the
software; unlike alpha testing, the developer is not present. Beta tests are
conducted at live sites not controlled by the developer. The customer records all
problems encountered during beta testing and reports them to the developer.
Stress Testing
Stress testing is the search for load-dependent bugs that occur only when the
system is stressed, and the specification of stress tests is a key activity.
Stress testing increases the overall quality and reliability of software.
Stress-related bugs are among the most difficult to find.
Stress testing exercises the system with a high background load, to the point
where one, several, or all resources are simultaneously saturated. It is to
system testing what destructive testing is to physical objects.
There are several reasons why stress-related bugs are encountered in systems.
Web Application Testing
The various types of tests that one can apply to web application testing are:
• Unit Testing - testing of the individual modules and pages that make up the
application.
• Page Flow Testing - ensures that pages that have a preceding page
requirement cannot be visited without having first visited the preceding page.
• Usability Testing – ensures that all pages present cohesive information to the
user. It is recommended that both end users and UI designers perform this
testing, and that this testing be done very early during the development cycle
so as to gain early user acceptance of the user interfaces and/or visual
semantics presented.
• Performance Testing – checks that both the hardware (including firewalls,
routers, etc.) and software scale to meet varying demands in user load.
• Data Volume Testing – moving large amounts of data through the application,
either directly via input/output to a web page, or as input/output from a
database that is referenced by a web page.
• Security Testing – ensuring that sensitive information is not presented in clear
text form, unauthorized access is not permitted, etc.
• Regression Testing – re-testing specific parts or all of the application to
ensure that the addition of new features has not affected other parts of the
application.
• External Testing – deals with checking the effect of external factors on the
application. Examples of external factors are the web server, the database
server, the browser, network connectivity issues, etc.
• Business Logic Testing – checks that the application's business functions work
accurately and perform according to constraints detailed in the requirements
document.
Some of the categories of testing suggested might not be applicable for certain
types of web-enabled applications; the development/test team(s) should determine
whether a test is or is not strictly required. If not, then the test should still
be defined, but simply marked as optional or not required. The rationale for
stating that a test is not required is simply to provide a record that the
condition requiring the test was considered, but deemed inappropriate for this
application.
Unit Testing
Unit testing involves testing the individual modules and pages that make up the
application. Typical tests that fall into this category are input tests (good
input, bad input, out-of-range input, excessive input, multiple clicks on
buttons, reloading pages before results are returned, etc.). The deliberate
attempt to confuse the application in this way can also be referred to as edge
testing.
• Invalid input (missing input, out-of-bound input, entering an integer when a
float is expected and vice versa, control characters in strings, etc.)
• Alternate input formats (e.g., 0 instead of 0.0, 0.00000001 instead of 0, etc.)
• Button-click testing, e.g., multiple clicks with and without pauses between
clicks.
• Immediate reload after a button click, before the response has been received.
• Multiple reloads in the same manner as above.
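A minimal sketch of these input tests, for a hypothetical quantity field that should accept whole numbers from 1 to 99 (the field and its range are assumptions for illustration):

```python
# Hypothetical server-side validator for a quantity field (1 to 99).
def parse_quantity(raw):
    try:
        value = int(raw)
    except (TypeError, ValueError):
        raise ValueError("quantity must be a whole number")
    if not 1 <= value <= 99:
        raise ValueError("quantity out of range")
    return value

def rejected(raw):
    """True if the input is rejected with a ValueError."""
    try:
        parse_quantity(raw)
    except ValueError:
        return True
    return False

assert parse_quantity("5") == 5    # good input
assert rejected("abc")             # bad input
assert rejected("100")             # out-of-range input
assert rejected("2.5")             # float where an integer is expected
assert rejected("1\x007")          # control character embedded in the string
assert rejected(None)              # missing input
```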
In general, unit tests check the behavior of a given page (i.e., does the
application behave correctly and consistently given either good or bad input).
Random-input and random-click testing also falls under the domain of unit
testing. This testing involves a user randomly pressing buttons (including
multiple clicks on "hrefs") and randomly picking checkboxes and selecting them.
Two forms of output screen are expected.
Page Flow Testing
Page flow testing deals with ensuring that jumping to random pages does not
confuse the application. Each page should typically check that it can only be
viewed via specific previous pages; if the referring page is not one of that set,
an error page should be displayed. A page flow diagram is a very useful aid for
the tester when checking for correct page flow within the application.
Other aspects of page flow testing cross into other areas of testing, such as
security. One simple check to consider is forcing the application to move along
an unnatural path; the application must resist and display an appropriate error
message. Page flow testing involves logging into the system and then attempting
to jump to pages in any order once a session has been established. The use of
bookmarks, temporary web pages set up to redirect into the middle of an
application using faked session information, and the like are all valid ways of
testing page flow and session security; security testing itself encompasses
several further areas, discussed later in this section.
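A referrer-style page flow check can be sketched as follows; the page names and the allowed-flow table are illustrative, not taken from any particular application.

```python
# Which referring pages may lead to each protected page (illustrative).
ALLOWED_FLOW = {
    "payment": {"cart"},           # payment reachable only from the cart
    "confirmation": {"payment"},
}

def render_page(page, referrer):
    allowed = ALLOWED_FLOW.get(page)
    if allowed is not None and referrer not in allowed:
        return "error: invalid navigation"   # resist the unnatural path
    return f"page: {page}"

# Normal flow works; jumping straight into the middle is rejected.
assert render_page("payment", referrer="cart") == "page: payment"
assert render_page("payment", referrer="home") == "error: invalid navigation"
assert render_page("confirmation", referrer=None) == "error: invalid navigation"
```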
Usability Testing
Usability testing ensures that all pages present a cohesive look to the user,
including spelling, graphics, page size, response time, etc. This testing also
overlaps with performance testing to some degree, as do load and regression
testing. Examples of usability testing include:
• Spelling checks
• Graphics checks (colors, dithering, aliasing, size, etc.)
• Meaningful error messages
• Accuracy of data displayed
• Accuracy of data in the database as a result of user input
• Accuracy of data in the database as a result of external factors (e.g. imported
data)
• Meaningful help pages including context sensitive help
Load Testing
Load testing the application involves generating varying loads against not only
the web server but also the databases supporting it and the
middleware/application-server logic connecting the pages to the databases. Load
testing also includes verification of data integrity on the web pages and within
the back-end database, and of behavior under load ramping or surges in activity
against the application. In particular, attention should be paid to pages that
include large amounts of data and to what happens if multiple users hit those
pages concurrently. Some of the questions to be asked are: "Does the site
scale?" and "Is the site's response time deterministic?"
Examples of load testing would include:
• Sustained low load test (50 users for around 48 hours).
• Sustained high load test (300+ users for 12 hours).
• Surge test (e.g., run 50 users, surge to 500 users, and then return to 50; no
memory leaks, lost users, orphaned processes, etc. should be seen). The
system should continue running with multiple surges at various times during
the day. This test should run for 48 hours.
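The shape of such a load test can be sketched with threads; the request handler below is a stand-in for real HTTP traffic, and the user counts are scaled down so the sketch runs quickly.

```python
import threading
import time

# Stand-in for a real HTTP request to the application under test.
def handle_request():
    time.sleep(0.001)   # simulated server work
    return 200

def run_load(num_users, requests_per_user):
    """Drive the handler from `num_users` threads, recording status and latency."""
    results = []
    lock = threading.Lock()

    def user():
        for _ in range(requests_per_user):
            start = time.perf_counter()
            status = handle_request()
            elapsed = time.perf_counter() - start
            with lock:
                results.append((status, elapsed))

    threads = [threading.Thread(target=user) for _ in range(num_users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

results = run_load(num_users=10, requests_per_user=5)
assert len(results) == 50                            # no lost requests
assert all(status == 200 for status, _ in results)   # no failed requests
```

A surge test in this framing runs `run_load` with a low user count, then a high one, then low again, while watching for leaked resources between the runs.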
Another very important facet of load testing is discovering at what load the
application fails and where the saturation points are. All applications
eventually degrade given sufficient load. Collecting this degradation-point data
is crucial, since it may be used as a control set for monitoring data during live
use of the application. Predictive analysis of live monitored data can indicate
when application stress points may appear on the horizon, so that proactive steps
can be taken to ensure that the application and/or the hardware on which it runs
scale up to meet new demand.
If major architectural changes occur in the application (for example, a total
facelift that makes it more usable), then the load tests must be re-run and a new
control set of performance data gathered, since the original control set becomes
invalid due to the architectural changes.
The table below illustrates loading estimates. This table is important in that it
gives the tester an indication of what is considered low, medium, and high load;
the test scripts to generate these loads can then be constructed accordingly. The
table should accurately reflect the real load requirements. For example, the
high-load number could be as high as 5,000 concurrent users or as low as only a
dozen or so concurrent users.
Data Volume Testing
Data volume testing involves testing the application under data load, where large
quantities of data are passed through the system (e.g., a large number of items
in dropdown/combo boxes, or large amounts of data in text boxes). Performance of the
application should be monitored during this testing, since a slow database could
significantly affect response time.
A key point is that one should also monitor all the systems involved in data
movement. The data collected from this monitoring is a valuable indicator of
system performance when coupled with the test load being delivered. It can be
used as a control set for contrast with monitoring data from a live system,
providing predictive information about when major application stress points may
be encountered.
No errors should be seen on application pages or in error logs for pages that are
data intensive.
Security Testing
Security testing involves verifying whether both the servers and the application
are managing security correctly. All these tests must pass with no exceptions; a
partial pass must be recorded as a "Fail", since unauthorized access to data is a
major breach of security. Other forms of security testing involve checking not
only electronic access security but also the physical security of the servers.
Some of the items that may be checked from a server perspective during this
testing are:
• Attempt to penetrate system security both internally and externally to ensure
that the system housing the application is secure from both internal and
external attacks.
• Attempt to cause conditions such as a buffer overflow that result in root
access being given accidentally (such code does exist, but explaining it is
beyond the scope of this document).
• Attempt to cause the application to crash by giving it false or random
information.
• Ensure that the server OS is at the correct patch levels from a security
viewpoint.
• Ensure that the server is physically secure.
Application-level security testing involves testing some or all of the following:
• Unauthenticated access to the application
• Unauthorized access to the application
• Unencrypted data passing
• Protection of the data
• Log files should be checked to ensure they do not contain sensitive
information
• Faked sessions. Session information must be valid and secure (e.g., a URL
containing a session identifier cannot be copied from one system to another
and the application then continued from the different system without being
detected).
• Multiple login testing by a single user from several clients.
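A faked-session check, for example, can be sketched as follows; the session store, the use of the client address as the binding, and all names are illustrative assumptions, not a prescription for real session management.

```python
import secrets

# Illustrative session store: each session id is bound to the client
# address it was issued to.
SESSIONS = {}

def create_session(client_addr):
    sid = secrets.token_hex(16)   # unguessable session identifier
    SESSIONS[sid] = client_addr
    return sid

def validate_session(sid, client_addr):
    # A session id presented from a different client is treated as faked.
    return SESSIONS.get(sid) == client_addr

sid = create_session("10.0.0.5")
assert validate_session(sid, "10.0.0.5")                # legitimate client
assert not validate_session(sid, "10.0.0.99")           # id copied to another system
assert not validate_session("forged-id", "10.0.0.5")    # faked session id
```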
Regression Testing
Regression testing ensures that, during the lifetime of the application, fixes do
not break other parts of the application. This type of testing typically involves
re-running all the other tests, or a relevant subset of them. A multidisciplinary
team consisting of developers, administrators, and load testers would normally
perform this test. If UI changes were made, then UI designers and end users need
to be involved. The regression tests must also be kept up to date with planned
changes in the application; as the application evolves, so must the tests.
External Testing
External testing deals with checking the effect of external factors on the
application. Examples of external factors are the web server, the database
server, the browser, network connectivity issues, etc. Examples of external
testing are:
• Forcing the browser to terminate prematurely during a page load, by using a
task manager to kill the browser or by hitting the ESC key, and then reloading
or revisiting the same page via a bookmark. The testing should cover both a
small delay (under 10 seconds) in reinstating the browser and a long delay
(over 10 minutes). In the latter case the user should not be able to connect
back to the application without being redirected to the login page.
• Simulation of a hub failure between the PC and the web server. This can be
simulated by removing the network cable from the PC, attempting to visit a
page, aborting the visit, and then reconnecting the cable. The test should use
two time delays: the first under 15 seconds, the second around 15 minutes
before reconnecting. After reconnecting, attempt to reload the previous page.
The user should be able to continue with the session unless a specified
timeout has occurred, in which case the user should be redirected to a login
page.
• Web server on/off test. Shut down the web server, then restart it. The user
should be able to connect back to the application without being redirected to
the login page; this proves the statelessness of individual pages. Note that
the shutdown is only for the web server; do not attempt this with an
application server, as that is a separate test.
• Database server on/off test. Shut down the database server and restart it. The
user should be able to connect back to the application without being
redirected to the login page. It may be that a single transaction needs to be
redone; the application should detect this and react accordingly.
• Application server on/off test. Shut down the application server and restart
it. There are two possible outcomes, depending on how session management is
implemented. The first is that the application redirects to an error page
indicating loss of connectivity, and the user is asked to log in and retry.
The second is that the application continues normally, because no session
information was lost: it was held in a persistent state that transcends
application-server restarts.
The items that should be checked carefully are layout differences, JavaScript
behavior, image behavior, etc.
Extended-session testing should involve checking some or all of the following
items:
• Remaining in a session for an extended period of time and clicking items to
navigate the screen. The session must not be terminated by the server except
in the case of a deliberate logout initiated by the user.
• Remaining on a single page for an extended length of time. The session should
be automatically terminated, and the next click by the user should take the
user to a page indicating why the session was terminated, with the option to
log back into the system. The page may have a timed redirect associated with
it; if so, a page indicating a timed-out session should be displayed. The
following must also be tested:
• The user's session should have been saved and may optionally be restored on
re-login.
• The user's state must reflect the last complete action the user performed.
• Leaving the application pages to visit another site or application and then
returning to the original application via a bookmark or the back button should
result in a restoration of state, and the application should continue as if
the user had not left.
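The idle-timeout behavior described above can be sketched as follows; the timeout is scaled down from a realistic value (e.g., 15-30 minutes) so the sketch runs quickly, and the page names are illustrative.

```python
import time

IDLE_TIMEOUT = 0.05   # scaled down for the sketch; realistically minutes

class Session:
    def __init__(self):
        self.last_activity = time.monotonic()

    def click(self):
        # A click after too long an idle period must explain the timeout
        # and (via the timed-out page) offer a way to log back in.
        if time.monotonic() - self.last_activity > IDLE_TIMEOUT:
            return "redirect: timed-out page"
        self.last_activity = time.monotonic()
        return "ok"

s = Session()
assert s.click() == "ok"                         # active session stays alive
time.sleep(0.1)                                  # idle on one page too long
assert s.click() == "redirect: timed-out page"   # next click explains timeout
```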
Power hit/cycle testing involves determining whether the servers and clients act
appropriately during the recovery process. This testing is difficult to perform
from a server perspective, since the servers are expected to operate with standby
power supplies and in a highly available configuration (i.e., hot spares, RAID,
application failover, etc.). However, the clients can be tested this way by
removing power from the PCs. Below are the tests performed.
Business Logic Testing
All the tests listed previously may pass with flying colors, but if the business
logic is not correctly encapsulated by the application, then the application has
not met its requirements. Business requirements vary in nature and are thus
difficult to generalize.
The best way to quantify the business logic is to start with an existing business
process model and extrapolate. Specific requirements for the web application can
be derived from the model, which may or may not involve significant back-end
mainframe or minicomputer technologies. The key is that the web application and
middleware logic meet the overall business requirements. Representatives from the
business team must participate in defining these requirements.
Electronic equipment such as televisions and cell phones use test points to which
probes may be attached and measurements taken to determine whether the equipment
is behaving within specific tolerance levels. The same approach may be taken for
software and is a valuable aid in determining application success. This is where
the majority of the test team's time will be spent: automated tools may be
purchased to augment load testing, but the tools to test specific points in the
application must be written, or the data examined by hand.
The test team is responsible for constructing a valid set of tests, scripts, and
tools to facilitate testing. These tools will become the core tools for
performing regression testing as the application's capabilities increase over
time.