Unit 1 STQA
What is testing?
Software Tester
Software Developer
Project Lead/Manager
End User
Different companies have different designations for the people who test their software, based on their experience and knowledge, such as Software Tester, Software Quality Assurance Engineer, QA Analyst, etc.
Software cannot be tested at just any point in its life cycle. The next two sections describe when testing should begin and when it should end during the SDLC.
Testing is done in different forms at every phase of the SDLC. During the requirement gathering phase, the analysis and verification of requirements are also considered testing. Reviewing the design in the design phase with the intent to improve it is also considered testing. Testing performed by a developer on completion of the code is categorized as unit testing.
Verification vs. Validation

Verification asks: Are you building it right? Validation asks: Are you building the right thing?
Verification ensures that the software system meets all of the specified functionality. Validation ensures that the functionality meets the intended behavior.
Verification takes place first and includes checking of documentation, code, etc. Validation occurs after verification and mainly involves checking of the overall product.
Verification is done by developers. Validation is done by testers.
Verification consists of static activities, as it includes reviews, walkthroughs, and inspections to check whether the software is correct. Validation consists of dynamic activities, as it includes executing the software against the requirements.
Verification is an objective process; no subjective decision should be needed to verify the software. Validation is a subjective process and involves subjective decisions on how well the software works.
Difference between Testing, Quality Assurance
and Quality Control
Most people are confused about the concepts of, and differences between, Quality Assurance, Quality Control, and Testing. Although they are interrelated and can at some level be considered the same activities, there are indeed differences between them. The definitions and differences are given below:
Inspection: A technique that involves formal or informal technical reviews of any artifact to identify errors or gaps. As per IEEE94, inspection is a formal evaluation technique in which software requirements, design, or code are examined in detail by a person or group other than the author to detect faults, violations of development standards, and other problems.
Formal inspection meetings may include the following stages: Planning, Overview, Preparation, Inspection Meeting, Rework, and Follow-up.
Debugging: Involves identifying, isolating, and fixing problems/bugs. Developers who code the software conduct debugging upon encountering an error in the code. Debugging is part of white box or unit testing. It can be performed in the development phase while conducting unit testing, or in later phases while fixing reported bugs.
Testing Myths
Given below are some of the more popular and common myths about software testing.

Myth: Testing is too expensive.
Reality: There is a saying: pay less for testing during software development, or pay more for maintenance or correction later. Early testing saves both time and cost in many respects; however, reducing cost by cutting testing may result in the improper design of a software application, rendering the product useless.
Myth: Testing can only begin once the software is fully developed.
Reality: No doubt, testing depends on the source code, but reviewing requirements and developing test cases are independent of the developed code. Moreover, an iterative or incremental approach as a development life cycle model may reduce the dependency of testing on fully developed software.
Myth: Complete testing is possible.
Reality: This becomes an issue when a client or tester thinks that complete testing is possible. All paths may have been tested by the team, but complete testing is never possible. There may be scenarios that are never executed by the test team or the client during the software development life cycle and that are executed only once the project has been deployed.
Myth: Tested software is bug-free.
Reality: This is a very common myth that clients, project managers, and management teams believe in. No one can say with absolute certainty that a software application is 100% bug-free, even if it has been tested by a tester with superb testing skills.
Myth: Missed defects are due to the testers.
Reality: It is not a correct approach to blame testers for bugs that remain in the application even after testing has been performed. This myth relates to constraints on time, cost, and changing requirements. The test strategy may also result in bugs being missed by the testing team.
Myth: Testers are responsible for the quality of the product.
Reality: It is a very common misinterpretation that only the testers or the testing team should be responsible for product quality. The testers' responsibilities include identifying bugs and reporting them to the stakeholders; it is then the stakeholders' decision whether to fix the bugs or release the software. Releasing the software at that point puts more pressure on the testers, as they will be blamed for any error.
Myth: Test automation should be used wherever possible to reduce time.
Reality: Yes, it is true that test automation reduces testing time, but it is not possible to start test automation at just any time during software development. Test automation should be started once the software has been manually tested and is stable to some extent. Moreover, test automation can never be used when requirements keep changing.
Myth: Anyone can test a software application.
Reality: People outside the IT industry think, and even believe, that anyone can test software and that testing is not a creative job. However, testers know very well that this is a myth. Thinking up alternative scenarios and trying to crash the software with the intent of exploring potential bugs is not possible for the person who developed it.
Myth: A tester's only task is to find bugs.
Reality: Finding bugs in the software is the task of the testers, but at the same time they are domain experts of the particular software. Developers are only responsible for the specific component or area assigned to them, but testers understand the overall workings of the software, what the dependencies are, and what impact one module has on another.
The above-mentioned quality attributes are further divided into sub-characteristics, which can be measured by internal or external metrics, as shown in the graphical depiction of the ISO-9126 model.
ISO/IEC 9241-11: Part 11 of this standard deals with the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency, and satisfaction in a specified context of use. This standard proposes a framework that describes the usability components and the relationships between them. In this standard, usability is considered in terms of user performance and satisfaction. According to ISO 9241-11, usability depends on the context of use, and the level of usability will change as the context changes.
ISO/IEC 25000: ISO/IEC 25000:2005 is commonly known as the standard that provides guidelines for Software product Quality Requirements and Evaluation (SQuaRE). This standard helps in organizing and enhancing the processes related to software quality requirements and their evaluation. ISO-25000 replaces the two older standards, ISO-9126 and ISO-14598.
ISO/IEC 12119: This standard deals with software packages delivered to the client. It does not focus on or deal with the client's (the person or organization to whom the software is delivered) production process. The main contents relate to the product description, the user documentation, and the programs and data contained in the package.
IEEE 829: A standard for the format of documents used in different stages of software
testing.
IEEE 1061: Defines a methodology for establishing quality requirements and for identifying, implementing, analyzing, and validating process and product software quality metrics.
IEEE 1059: Guide for Software Verification and Validation Plans.
IEEE 1008: A standard for unit testing.
IEEE 1012: A standard for Software Verification and Validation.
IEEE 1028: A standard for software inspections.
IEEE 1044: A standard for the classification of software anomalies.
IEEE 1044-1: A guide to the classification of software anomalies.
IEEE 830: A guide for developing software requirements specifications.
IEEE 730: A standard for software quality assurance plans.
IEEE 12207: A standard for software life cycle processes and life cycle data.
BS 7925-1: A vocabulary of terms used in software testing.
BS 7925-2: A standard for software component testing.
Testing Types
Manual Testing
This type includes testing the software manually, i.e., without using any automated tool or script. In this type the tester takes on the role of an end user and tests the software to identify any unexpected behavior or bug. There are different stages of manual testing, such as unit testing, integration testing, system testing, and user acceptance testing.
Testers use test plans, test cases, or test scenarios to test the software and to ensure the completeness of testing. Manual testing also includes exploratory testing, as testers explore the software to identify errors in it.
Automation Testing
(Figure: test script, test execution, test automation.)
Automation testing, which is also known as "test automation", is when the tester writes scripts and uses other software to test the software. This process involves the automation of a manual process. Automation testing is used to re-run, quickly and repeatedly, test scenarios that were originally performed manually.
Apart from regression testing, automation testing is also used to test the application from a load, performance, and stress point of view. It increases test coverage, improves accuracy, and saves time and money in comparison to manual testing.
Furthermore all GUI items, connections with databases, field validations etc. can be
efficiently tested by automating the manual process.
When to Automate: Test automation should be used after considering the following aspects of the software (a small test sketch follows the list):
Large and critical projects.
Projects that require testing the same areas frequently.
Requirements not changing frequently.
Assessing the application for load and performance with many virtual users.
Software that is stable with respect to manual testing.
Availability of time.
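
As an illustration only, the following is a minimal sketch of such a scripted check, written in Python for the pytest runner. The login() function and its credentials are invented stand-ins for a real application entry point, not part of any particular tool.

# Hypothetical automated check for pytest; login() stands in for
# driving the real application (for example, through its API or UI).
def login(username, password):
    # Placeholder for the application behavior under test.
    return username == "admin" and password == "secret"

def test_login_succeeds_with_valid_credentials():
    assert login("admin", "secret") is True

def test_login_fails_with_invalid_password():
    assert login("admin", "wrong") is False

Saved as test_login.py, running pytest re-executes both checks in seconds, which is exactly what makes automating a stable manual scenario worthwhile.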
A number of commercial and open-source tools can be used for automation testing.
Testing Methods
Black Box Testing
The technique of testing without having any knowledge of the interior workings of the application is black box testing. The tester is oblivious to the system architecture and does not have access to the source code. Typically, when performing a black box test, a tester will interact with the system's user interface by providing inputs and examining outputs, without knowing how and where the inputs are processed.
Advantages:
Disadvantages:
Limited coverage, since only a selected number of test scenarios are actually performed.
Inefficient testing, since the tester has only limited knowledge of the application.
Blind coverage, since the tester cannot target specific code segments or error-prone areas.
The test cases are difficult to design.
White Box Testing
To perform white box testing on an application, the tester needs to possess knowledge of the internal working of the code. The tester needs to look inside the source code and find out which unit/chunk of the code is behaving inappropriately.
Advantages:
As the tester has knowledge of the source code, it becomes very easy to find out which
type of data can help in testing the application effectively.
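
For example, a tester who can see the code knows exactly which branches exist and can pick one input per branch. A minimal sketch, with the discount() function invented for illustration:

# White box sketch: discount() has two visible branches, so two
# carefully chosen inputs cover both of them.
def discount(total):
    if total >= 100:      # branch 1: large orders get 10% off
        return total * 0.9
    return total          # branch 2: small orders are unchanged

def test_discount_applied():
    assert discount(200) == 180.0

def test_discount_not_applied():
    assert discount(50) == 50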
Disadvantages:
Because a skilled tester is needed to perform white box testing, costs are increased.
Sometimes it is impossible to look into every nook and corner to find hidden errors that may create problems, and many paths will go untested.
It is difficult to maintain white box testing, as the use of specialized tools like code analyzers and debugging tools is required.
Grey Box Testing
Mastering the domain of a system always gives the tester an edge over someone with limited domain knowledge. Unlike black box testing, where the tester only tests the application's user interface, in grey box testing the tester has access to design documents and the database. With this knowledge, the tester is able to prepare better test data and test scenarios when making the test plan.
Advantages:
Offers combined benefits of black box and white box testing wherever possible.
Grey box testers don’t rely on the source code; instead they rely on interface
definition and functional specifications.
Based on the limited information available, a grey box tester can design excellent
test scenarios especially around communication protocols and data type handling.
The test is done from the point of view of the user and not the designer.
Disadvantages:
Since the access to source code is not available, the ability to go over the code
and test coverage is limited.
The tests can be redundant if the software designer has already run a test case.
Testing every possible input stream is unrealistic because it would take an
unreasonable amount of time; therefore, many program paths will go untested.
(Table comparing the three methods: the internal workings of the application are fully known to the tester in white box testing, partially known in grey box testing, and not known in black box testing.)
Levels of Testing
Levels of testing include the different methodologies that can be used while conducting software testing. The main levels of software testing are:
Functional Testing
Non-functional Testing
Functional Testing
This is a type of black box testing that is based on the specifications of the software to be tested. The application is tested by providing input, and the results are then examined for conformance to the intended functionality. Functional testing of the software is conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements. There are five steps involved when testing an application for functionality.
An effective testing practice will see the above steps applied to the testing policies of every
organization and hence it will make sure that the organization maintains the strictest of
standards when it comes to software quality.
Unit Testing
This type of testing is performed by the developers before the build is handed over to the testing team to formally execute the test cases. Unit testing is performed by the respective developers on the individual units of source code in their assigned areas. The developers use test data that is separate from the test data of the quality assurance team.
The goal of unit testing is to isolate each part of the program and show that individual parts
are correct in terms of requirements and functionality.
There is a limit to the number of scenarios and test data that a developer can use to verify the source code. After exhausting all options, there is no choice but to stop unit testing and merge the code segment with other units.
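
To make the idea concrete, here is a minimal sketch using Python's built-in unittest module; add() is an invented stand-in for one unit of source code in a developer's assigned area:

import unittest

def add(a, b):
    # The isolated unit under test.
    return a + b

class TestAdd(unittest.TestCase):
    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative_numbers(self):
        self.assertEqual(add(-2, -3), -5)

if __name__ == "__main__":
    unittest.main()

Each test isolates one behavior of the unit, which is the stated goal of unit testing.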
Integration Testing
Integration testing is the testing of combined parts of an application to determine whether they function correctly together. There are two methods of integration testing: bottom-up integration testing and top-down integration testing.
System Testing
This is the next level of testing; it tests the system as a whole. Once all the components are integrated, the application as a whole is tested rigorously to see that it meets the specified quality standards. This type of testing is performed by a specialized testing team.
System testing is the first level of testing in which the application is tested as a whole.
The application is tested thoroughly to verify that it meets the functional and
technical specifications.
The application is tested in an environment which is very close to the
production environment where the application will be deployed.
System testing enables us to test, verify, and validate both the business requirements and the application's architecture.
Regression Testing
Whenever a change is made in a software application, it is quite possible that other areas within the application have been affected by this change. Regression testing verifies that a fixed bug has not resulted in another functional or business rule violation. The intent of regression testing is to ensure that a change, such as a bug fix, did not result in another fault being uncovered in the application. Its benefits include the following (a small example follows the list):
Minimizes gaps in testing when an application with changes has to be tested.
Tests the new changes to verify that the changes made did not affect any other area of the application.
Mitigates risks when regression testing is performed on the application.
Increases test coverage without compromising timelines.
Increases the speed to market the product.
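
One common way to re-run a small, focused subset of existing tests after every change is to tag them. A hedged sketch using pytest markers; the function, the bug it guards against, and the marker name are all invented for the example:

import pytest

def normalize(s):
    # Hypothetical function whose earlier bug (unstripped whitespace)
    # was fixed; the test below keeps that fix from regressing.
    return s.strip()

@pytest.mark.regression
def test_whitespace_fix_stays_fixed():
    assert normalize("  a  ") == "a"

Running pytest -m regression then re-executes only the tagged regression subset; the marker should be registered in pytest.ini to avoid warnings.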
Acceptance Testing
This is arguably the most important type of testing, as it is conducted by the Quality Assurance Team, who will gauge whether the application meets the intended specifications and satisfies the client's requirements. The QA team will have a set of pre-written scenarios and test cases that will be used to test the application.
More ideas will be shared about the application, and more tests can be performed on it to gauge its accuracy and the reasons why the project was initiated. Acceptance tests are intended not only to point out simple spelling mistakes, cosmetic errors, or interface gaps, but also to point out any bugs in the application that will result in system crashes or major errors.
By performing acceptance tests on an application the testing team will deduce how the
application will perform in production. There are also legal and contractual requirements
for acceptance of the system.
Alpha Testing
This test is the first stage of testing and will be performed amongst the teams (developer
and QA teams). Unit testing, integration testing and system testing when combined are
known as alpha testing. During this phase, the following will be tested in the application:
Spelling Mistakes
Broken Links
Cloudy Directions
The Application will be tested on machines with the lowest specification to test
loading times and any latency problems.
Beta Testing
This test is performed after Alpha testing has been successfully performed. In beta
testing a sample of the intended audience tests the application. Beta testing is also
known as pre-release testing. Beta test versions of software are ideally distributed to a
wide audience on the Web, partly to give the program a "real-world" test and partly to
provide a preview of the next release. In this phase the audience will be testing the
following:
Users will install, run the application and send their feedback to the project
team.
Typographical errors, confusing application flow, and even crashes.
Using this feedback, the project team can fix the problems before releasing the
software to the actual users.
The more issues you fix that solve real user problems, the higher the quality of
your application will be.
Having a higher-quality application when you release to the general public will
increase customer satisfaction.
Non-Functional Testing
This section covers testing the application against its non-functional attributes. Non-functional testing involves testing the software against requirements that are non-functional in nature but nonetheless important, such as performance, security, and user interface. Some of the important and commonly used non-functional testing types are described below.
Performance Testing
It is mostly used to identify any bottlenecks or performance issues, rather than to find bugs in the software. Different causes contribute to lowering the performance of software:
Network delay.
Client side processing.
Database transaction processing.
Load balancing between servers.
Data rendering.
Performance testing is considered one of the important and mandatory testing types. It can be either a qualitative or a quantitative testing activity, and it can be divided into different subtypes, such as load testing and stress testing.
Load Testing
Load testing is the process of testing the behavior of the software by applying maximum load, in terms of the software accessing and manipulating large input data. It can be done at both normal and peak load conditions. This type of testing identifies the maximum capacity of the software and its behavior at peak time.
Most of the time, Load testing is performed with the help of automated tools such as Load
Runner, AppLoader, IBM Rational Performance Tester, Apache JMeter, Silk Performer,
Visual Studio Load Test etc.
Virtual users (VUsers) are defined in the automated testing tool, and a script is executed to verify the load testing for the software. The number of users can be increased or decreased concurrently or incrementally, based upon the requirements.
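
To show the virtual-user idea in miniature (a real load test would use one of the tools named above), here is a sketch using only Python's standard library. The target URL and the user count are placeholders:

import threading
import time
import urllib.request

URL = "http://localhost:8000/"   # placeholder target; point at a test server
VUSERS = 10                      # number of concurrent virtual users

def virtual_user(results, i):
    # Each virtual user issues one request and records its response time.
    start = time.time()
    try:
        urllib.request.urlopen(URL, timeout=5)
        results[i] = time.time() - start
    except OSError:
        results[i] = None        # the request failed under load

results = [None] * VUSERS
threads = [threading.Thread(target=virtual_user, args=(results, i))
           for i in range(VUSERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("response times:", results)

Increasing VUSERS concurrently or incrementally mirrors how the commercial tools ramp up virtual users.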
Stress Testing
This testing type includes testing the software's behavior under abnormal conditions. Taking away resources and applying load beyond the actual load limit constitute stress testing. The main intent is to test the software by applying load to the system and taking over the resources used by the software, in order to identify the breaking point. This testing can be performed through a variety of abnormal scenarios.
Usability Testing
This section includes different concepts and definitions of usability testing from a software point of view. It is a black box technique, used to identify errors and improvements in the software by observing users through their usage and operation.
According to Nielsen, usability can be defined in terms of five factors: efficiency of use, learnability, memorability, errors/safety, and satisfaction. According to him, the usability of the product is good and the system is usable if it possesses these factors.
Nigel Bevan and Macleod considered usability to be a quality requirement that can be measured as the outcome of interactions with a computer system. This requirement can be fulfilled, and the end user will be satisfied, if the intended goals are achieved effectively with the use of proper resources.
Molich, in 2000, stated that a user-friendly system should fulfill the following five goals: Easy to Learn, Easy to Remember, Efficient to Use, Satisfactory to Use, and Easy to Understand.
In addition to different definitions of usability, there are some standards and quality models
and methods which define the usability in the form of attributes and sub attributes such
as ISO-9126, ISO-9241-11, ISO-13407 and IEEE std.610.12 etc.
UI testing involves testing the graphical user interface of the software. It ensures that the GUI conforms to the requirements in terms of color, alignment, size, and other properties.
Usability testing, on the other hand, ensures that a good and user-friendly GUI is designed and is easy to use for the end user. UI testing can be considered a sub-part of usability testing.
Security Testing
Security testing involves testing the software in order to identify any flaws and gaps from a security and vulnerability point of view. Security testing should ensure the following main aspects:
Confidentiality.
Integrity.
Authentication.
Availability.
Authorization.
Non-repudiation.
Software is secure against known and unknown vulnerabilities.
Software data is secure.
Software complies with all security regulations.
Input checking and validation.
SQL injection attacks (see the sketch after this list).
Injection flaws.
Session management issues.
Cross-site scripting attacks.
Buffer overflow vulnerabilities.
Directory traversal attacks.
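
As one concrete example of input checking against SQL injection, the following is a minimal sketch using Python's built-in sqlite3 module; the table, data, and input are invented for illustration:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"  # a typical injection attempt

# Unsafe pattern (commented out): string concatenation would let the
# input rewrite the structure of the query itself.
# conn.execute("SELECT role FROM users WHERE name = '" + user_input + "'")

# Safe pattern: the ? placeholder treats the input strictly as data.
rows = conn.execute("SELECT role FROM users WHERE name = ?",
                    (user_input,)).fetchall()
print(rows)  # [] because the injection attempt matches no user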
Portability Testing
Portability testing includes testing the software with the intent that it should be reusable and can be moved to another environment as well. It can be considered one of the sub-parts of system testing, as this testing type covers the overall testing of the software with respect to its usage across different environments. Computer hardware, operating systems, and browsers are the major focus of portability testing.
Testing Documentation
Testing documentation involves the documentation of artifacts that are developed before or during the testing of the software. Documentation for software testing helps in estimating the required testing effort, test coverage, requirement tracking/tracing, etc. This section describes some commonly used documented artifacts related to software testing:
Test Plan
Test Scenario
Test Case
Traceability Matrix
Test Plan
A test plan outlines the strategy that will be used to test an application, the resources
that will be used, the test environment in which testing will be performed, the limitations
of the testing and the schedule of testing activities. Typically the Quality Assurance Team
Lead will be responsible for writing a Test Plan. A test plan will include the following.
Test Scenario
A test scenario is a one-line statement that tells what area of the application will be tested. Test scenarios are used to ensure that all process flows are tested from end to end. A particular area of an application can have as few as one test scenario or as many as a few hundred scenarios, depending on the magnitude and complexity of the application.
The terms test scenario and test case are used interchangeably; however, the main difference is that a test scenario has several steps, whereas a test case has a single step. Viewed from this perspective, test scenarios are test cases, but they include several test cases and the sequence in which they should be executed. Apart from this, each test is dependent on the output from the previous test.
Test Case
Test cases involve a set of steps, conditions, and inputs that can be used while performing the testing tasks. The main intent of this activity is to determine whether the software passes or fails in terms of its functionality and other aspects. There are many types of test cases, such as functional, negative, error, logical, physical, and UI test cases.
A test case typically includes the following fields:
Product version
Revision history
Purpose
Assumptions
Pre-conditions
Steps
Expected outcome
Actual outcome
Post-conditions
Many test cases can be derived from a single test scenario. In addition, multiple test cases are often written for a single piece of software; collectively, these are known as a test suite.
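
Filling in the fields listed above for one invented scenario, a test case might be recorded as follows (every value here is illustrative only):

# An illustrative test case expressed as a Python record.
test_case = {
    "id": "TC-001",
    "purpose": "Verify login with valid credentials",
    "assumptions": "The test environment is reachable",
    "pre_conditions": "User 'admin' exists and is active",
    "steps": [
        "Open the login page",
        "Enter username 'admin' and a valid password",
        "Click the Login button",
    ],
    "expected_outcome": "User is redirected to the dashboard",
    "actual_outcome": "",       # filled in during test execution
    "post_conditions": "A user session is created",
}
print(test_case["purpose"])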
SOFTWARE TESTING
(Figure: levels of testing paired with development phases: unit testing with code, integration testing with design, validation testing with requirements, and system testing with system engineering.)
LEVELS OF TESTING FOR
CONVENTIONAL SOFTWARE
• Unit testing
– Concentrates on each component/function of the software as implemented in
the source code
• Integration testing
– Focuses on the design and construction of the software architecture
• Validation testing
– Requirements are validated against the constructed software
• System testing
– The software and other system elements are tested as a whole
TESTING STRATEGY APPLIED TO
CONVENTIONAL SOFTWARE
• Unit testing
– Exercises specific paths in a component's control structure to ensure complete
coverage and maximum error detection
– Components are then assembled and integrated
• Integration testing
– Focuses on inputs and outputs, and how well the components fit together and
work together
• Validation testing
– Provides final assurance that the software meets all functional, behavioral,
and performance requirements
• System testing
– Verifies that all system elements (software, hardware, people, databases)
mesh properly and that overall system function and performance is achieved
TESTING STRATEGY APPLIED TO
OBJECT-ORIENTED SOFTWARE
• Must broaden testing to include detection of errors in analysis and design models
• Unit testing loses some of its meaning and integration testing changes
significantly
• Use the same philosophy as in conventional software testing, but a different approach
• Test "in the small" and then work out to testing "in the large"
– Testing in the small involves class attributes and operations; the main focus is
on communication and collaboration within the class
– Testing in the large involves a series of regression tests to uncover errors due
to communication and collaboration among classes
• Finally, the system as a whole is tested to detect errors in fulfilling
requirements
INTEGRATION TESTING
• Three kinds
– Top-down integration
– Bottom-up integration
– Sandwich integration
• The program is constructed and tested in small increments
• Errors are easier to isolate and correct
• Interfaces are more likely to be tested completely
• A systematic test approach is applied
TOP-DOWN INTEGRATION
• Modules are integrated by moving downward through the control
hierarchy, beginning with the main module
• Subordinate modules are incorporated in either a depth-first or breadth-
first fashion
– DF: All modules on a major control path are integrated
– BF: All modules directly subordinate at each level are integrated
• Advantages
– This approach verifies major control or decision points early in the test process
• Disadvantages
– Stubs need to be created to substitute for modules that have not been built or tested yet; this code is later discarded (see the sketch after this list)
– Because stubs are used to replace lower level modules, no significant data
flow can occur until much later in the integration/testing process
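
A minimal sketch of the stub idea in Python; all names are invented:

# Top-down sketch: the high-level module is real, the subordinate
# module is a throwaway stub.
def tax_service_stub(amount):
    # Replaces the unbuilt tax module with a fixed, known answer so
    # the higher-level control flow can be exercised early.
    return 0.0

def checkout(amount, tax_service=tax_service_stub):
    # High-level module under test; its decision logic runs even
    # though the real tax calculation does not exist yet.
    total = amount + tax_service(amount)
    return round(total, 2)

assert checkout(100.0) == 100.0  # exercises the major control path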
BOTTOM-UP INTEGRATION
• Integration and testing starts with the most atomic modules in the control
hierarchy
• Advantages
– This approach verifies low-level data processing early in the testing process
– Need for stubs is eliminated
• Disadvantages
– Driver modules need to be built to test the lower-level modules; this code is later discarded or expanded into a full-featured version (see the sketch after this list)
– Drivers inherently do not contain the complete algorithms that will eventually
use the services of the lower-level modules; consequently, testing may be
incomplete or more testing may be needed later when the upper level
modules are available
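
The mirror image of a stub is a driver; a sketch with invented names:

# Bottom-up sketch: the low-level module is real, and a throwaway
# driver exercises it before any real caller exists.
def parse_price(text):
    # Finished atomic module: converts a price string into cents.
    return int(round(float(text) * 100))

def driver():
    # Temporary driver; later discarded or grown into the real caller.
    for raw, expected in [("12.50", 1250), ("0.99", 99), ("3", 300)]:
        assert parse_price(raw) == expected
    print("low-level module OK")

driver()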
SANDWICH INTEGRATION
• Consists of a combination of both top-down and bottom-up integration
• Occurs both at the highest level modules and also at the lowest level
modules
• Proceeds using functional groups of modules, with each group completed
before the next
– High and low-level modules are grouped based on the control and data
processing they provide for a specific program feature
– Integration within the group progresses in alternating steps between the high
and low level modules of the group
– When integration for a certain functional group is complete, integration and
testing moves onto the next group
• Reaps the advantages of both types of integration while minimizing the
need for drivers and stubs
• Requires a disciplined approach so that integration doesn't tend towards the big bang scenario
REGRESSION TESTING
• Each new addition or change to baselined software may cause problems
with functions that previously worked flawlessly
• Regression testing re-executes a small subset of tests that have already
been conducted
– Ensures that changes have not propagated unintended side effects
– Helps to ensure that changes do not introduce unintended behavior or
additional errors
– May be done manually or through the use of automated capture/playback
tools
• Regression test suite contains three different classes of test cases
– A representative sample of tests that will exercise all software functions
– Additional tests that focus on software functions that are likely to be affected
by the change
– Tests that focus on the actual software components that have been changed
SMOKE TESTING
• Taken from the world of hardware
– Power is applied and a technician checks for sparks, smoke, or other
dramatic signs of fundamental failure
• Designed as a pacing mechanism for time-critical projects
– Allows the software team to assess its project on a frequent basis
• Includes the following activities
– The software is compiled and linked into a build
– A series of breadth tests is designed to expose errors that will keep the
build from properly performing its function
• The goal is to uncover show-stopper errors that have the highest likelihood of throwing the software project behind schedule
– The build is integrated with other builds and the entire product is smoke
tested daily
• Daily testing gives managers and practitioners a realistic assessment of the
progress of the integration testing
– After a smoke test is completed, detailed test scripts are executed (a minimal sketch follows)
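
A hedged sketch of what such a breadth-first smoke script might look like against a daily build; every check below is a placeholder for a real product entry point:

# Minimal daily smoke script; the three checks stand in for importing
# the product, launching it, and running one broad end-to-end function.
def smoke():
    checks = {
        "imports": lambda: __import__("json") is not None,
        "startup": lambda: True,
        "core_path": lambda: sum([1, 2, 3]) == 6,
    }
    failures = [name for name, check in checks.items() if not check()]
    if failures:
        raise SystemExit("show-stoppers found: " + ", ".join(failures))
    print("smoke test passed; detailed test scripts may proceed")

smoke()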
BENEFITS OF SMOKE TESTING
• Integration risk is minimized
– Daily testing uncovers incompatibilities and show-stoppers early in the
testing process, thereby reducing schedule impact
• The quality of the end-product is improved
– Smoke testing is likely to uncover both functional errors and architectural
and component-level design errors
• Error diagnosis and correction are simplified
– Smoke testing will probably uncover errors in the newest components
that were integrated
• Progress is easier to assess
– As integration testing progresses, more software has been integrated and
more has been demonstrated to work
– Managers get a good indication that progress is being made
TEST STRATEGIES FOR
OBJECT-ORIENTED SOFTWARE
• With object-oriented software, you can no longer test a single operation
in isolation (conventional thinking)
• Traditional top-down or bottom-up integration testing has little meaning
• Class testing for object-oriented software is the equivalent of unit testing
for conventional software
– Focuses on operations encapsulated by the class and the state behavior of
the class
• Drivers can be used
– To test operations at the lowest level and for testing whole groups of classes
– To replace the user interface so that tests of system functionality can be
conducted prior to implementation of the actual interface
• Stubs can be used
– In situations in which collaboration between classes is required but one or
more of the collaborating classes has not yet been fully implemented
TEST STRATEGIES FOR OBJECT-
ORIENTED SOFTWARE (CONTINUED)
• Two different object-oriented testing strategies
– Thread-based testing
• Integrates the set of classes required to respond to one input or event for the
system
• Each thread is integrated and tested individually
• Regression testing is applied to ensure that no side effects occur
– Use-based testing
• First tests the independent classes that use very few, if any, server classes
• Then the next layer of classes, called dependent classes, are integrated
• This sequence of testing layer of dependent classes continues until the entire
system is constructed
VALIDATION TESTING
• Validation testing follows integration testing
• The distinction between conventional and object-oriented software disappears
• Focuses on user-visible actions and user-recognizable output from the system
• Demonstrates conformity with requirements
• Designed to ensure that
– All functional requirements are satisfied
– All behavioral characteristics are achieved
– All performance requirements are attained
– Documentation is correct
– Usability and other requirements are met (e.g., transportability, compatibility, error
recovery, maintainability)
• After each validation test
– The function or performance characteristic conforms to specification and is accepted
– A deviation from specification is uncovered and a deficiency list is created
• A configuration review or audit ensures that all elements of the software
configuration have been properly developed, cataloged, and have the necessary
detail for entering the support phase of the software life cycle
ALPHA AND BETA TESTING
• Alpha testing
– Conducted at the developer's site by end users
– Software is used in a natural setting with developers watching intently
– Testing is conducted in a controlled environment
• Beta testing
– Conducted at end-user sites
– Developer is generally not present
– It serves as a live application of the software in an environment that cannot be
controlled by the developer
– The end-user records all problems that are encountered and reports these to
the developers at regular intervals
• After beta testing is complete, software engineers make software
modifications and prepare for release of the software product to the
entire customer base
SYSTEM TESTING
• Recovery testing
– Tests for recovery from system faults
– Forces the software to fail in a variety of ways and verifies that recovery is
properly performed
– Tests reinitialization, checkpointing mechanisms, data recovery, and restart
for correctness
• Security testing
– Verifies that protection mechanisms built into a system will, in fact, protect it
from improper access
• Stress testing
– Executes a system in a manner that demands resources in abnormal quantity,
frequency, or volume
• Performance testing
– Tests the run-time performance of software within the context of an
integrated system
– Often coupled with stress testing and usually requires both hardware and
software instrumentation
– Can uncover situations that lead to degradation and possible system failure