ST1 PDF
Lajpat Nagar
BCA
Sem V
SOFTWARE TESTING
UNIT 1
Testing involves different stakeholders: the Software Tester, the Software Developer, the Project Lead/Manager, and the End User.
Different companies have different designations for people who test software, based on their experience and knowledge, such as Software Tester, Software Quality Assurance Engineer, QA Analyst, etc.
It is not possible to test the software at just any point in its life cycle. The next two sections state when testing should start and when it should end during the SDLC.
An early start to testing reduces the cost and time of rework and helps deliver error-free software to the client. In the Software Development Life Cycle (SDLC), testing can start from the Requirements Gathering phase and continue till the deployment of the software. However, it also depends on the development model being used. For example, in the Waterfall model formal testing is conducted in the Testing phase, but in the incremental model, testing is performed at the end of every increment/iteration, and at the end the whole application is tested.
Testing is done in different forms at every phase of the SDLC: during the Requirements Gathering phase, the analysis and verification of requirements are also considered testing. Reviewing the design in the Design phase with the intent to improve it is also considered testing. Testing performed by a developer on completion of the code is categorized as unit testing.
Testing ends when one or more of the following criteria are met:
Testing deadlines are reached.
The bug rate falls below a certain level and no high-priority bugs are identified.
Management decision.
Testing Myths
Given below are some of the more popular and common myths about
Software testing.
Reality: There is a saying: pay less for testing during software development, or pay more for maintenance or correction later. Early testing saves both time and cost in many respects; however, cutting cost by skipping testing may result in the improper design of a software application, rendering the product useless.
Reality: It is not a correct approach to blame testers for bugs that remain in the application even after testing has been performed. This myth relates to the constraints of changing time, cost, and requirements. The test strategy may also result in bugs being missed by the testing team.
The decision is often whether to fix the bug or release the software. Releasing the software at that point puts more pressure on the testers, as they will be blamed for any error.
Reality: Yes, it is true that Test Automation reduces testing time, but it is not possible to start Test Automation at just any time during software development. Test Automation should be started once the software has been manually tested and is stable to some extent. Moreover, Test Automation can never be used if requirements keep changing.
Reality: People outside the IT industry think, and even believe, that anyone can test software and that testing is not a creative job. However, testers know very well that this is a myth. Thinking of alternative scenarios and trying to crash the software with the intent of exploring potential bugs is not possible for the person who developed it.
Reality: Finding bugs in the Software is the task of testers but at the same time
they are domain experts of the particular software. Developers are only
responsible for the specific component or area that is assigned to them but
testers understand the overall workings of the software, what the dependencies
are and what the impacts of one module on another module are.
While determining coverage, the test cases should be designed with the maximum possibility of finding errors or bugs. The test cases should be very effective. This objective can be measured by the number of defects reported per test case: the higher the number of defects reported, the more effective the test cases are.
Once the delivery is made to the end users or customers, they should be able to operate it without any complaints. To make this happen, the tester should know how the customers are going to use the product and accordingly write the test scenarios and design the test cases. This will help a lot in fulfilling all the customer's requirements.
Software testing makes sure that testing is done properly and hence the system is ready for use. Good coverage means that testing has covered the various areas: functionality of the application; compatibility of the application with the OS, hardware and different types of browsers; performance testing to test the performance of the application; and load testing to make sure the system is reliable, does not crash, and has no blocking issues. It also determines that the application can be deployed easily to the machine without any resistance, and hence is easy to install, learn and use.
When the actual result deviates from the expected result while testing a software application or product, it results in a defect. Hence, any deviation from the specification mentioned in the product functional specification document is a defect. Different organizations call it differently: bug, issue, incident or problem.
When the result of the software application or product does not meet the end user's expectations or the software requirements, it results in a bug or defect. These defects or bugs occur because of an error in logic or in coding, which results in failure or unpredicted and unanticipated results.
Additional Information about Defects / Bugs:
When a tester finds a bug or defect, it must be conveyed to the developers. Bugs are therefore reported with detailed steps, in what are called bug reports, issue reports, problem reports, etc.
Since we assume that our work may have mistakes, we all need to check our own work. However, some mistakes come from bad assumptions and blind spots, so when we check our own work we might make the same mistakes we made when we did it, and thus not notice the flaws in what we have done.
Software testing is really required to point out the defects and errors that were made during the development phases.
It's essential since it ensures the customer's trust in, and satisfaction with, the application.
It is very important to ensure the quality of the product. A quality product delivered to the customers helps in gaining their confidence.
Testing is necessary to deliver a high-quality product or software application to the customers, one that requires lower maintenance cost and hence gives more accurate, consistent and reliable results.
Testing is required for the effective performance of a software application or product.
It's important to ensure that the application does not result in failures, because failures can be very expensive in the later stages of development.
It's required to stay in business.
Principles of Testing
1) Testing shows presence of defects: Testing can show that defects are present, but cannot prove that there are no defects. Even after testing the application or product thoroughly, we cannot say that the product is 100% defect-free. Testing always reduces the number of undiscovered defects remaining in the software, but even if no defects are found, that is not a proof of correctness.
2) Exhaustive testing is impossible: Testing everything, including all combinations of inputs and preconditions, is not possible. So, instead of doing exhaustive testing, we can use risks and priorities to focus testing efforts. For example: if one screen of an application has 15 input fields, each having 5 possible values, then to test all the valid combinations you would need 5^15 = 30,517,578,125 tests. It is very unlikely that the project timescales would allow for this number of tests. So, assessing and managing risk is one of the most important activities and a key reason for testing in any project.
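The arithmetic above is easy to verify; a one-line check (field and value counts taken from the example):

```python
# 15 input fields, each with 5 possible values, as in the example above:
# exhaustive testing of all valid combinations needs 5**15 tests.
fields = 15
values_per_field = 5
combinations = values_per_field ** fields
print(combinations)  # 30517578125
```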
3) Early testing: In the software development life cycle testing activities should
start as early as possible and should be focused on defined objectives.
5) Pesticide paradox: If the same kinds of tests are repeated again and again, eventually the same set of test cases will no longer find any new bugs. To overcome this "pesticide paradox", it is very important to review the test cases regularly, and new and different tests need to be written to exercise different parts of the software or system to potentially find more defects.
The Software Testing Life Cycle (STLC) consists of the following phases:
Requirement Analysis
Test Planning
Test Case Development
Environment Setup
Test Execution
Test Cycle Closure
Ideally, each step is based on the previous one; we can say the next step cannot be started until the previous step is completed. That holds in an ideal situation, but practically it is not always true. So let's discuss the activities and deliverables involved in each step in detail.
Requirement Analysis:
QA is involved from this very first step of the STLC, which helps prevent introducing defects into the software under test. The requirements can be either functional or non-functional, like performance or security. Requirement and automation feasibility analysis of the project can also be done in this stage (if applicable).
Entry Criteria:
- The following documents should be available: Requirements Specification, Application architecture.
- Along with the above documents, acceptance criteria should be well defined.
Activities:
- Prepare the list of questions or queries and get them resolved from the Business Analyst, System Architect, Client, Technical Manager/Lead, etc.
- Make out the list of all types of tests to be performed, like Functional, Security, and Performance.
- Define the testing focus and priorities.
Deliverables:
- List of questions with all answers resolved from business, i.e. testable requirements.
- Automation feasibility report (if applicable).
Test Planning:
Test Planning is the most important phase of the software testing life cycle, where the entire testing strategy is defined. This phase is also called the Test Strategy phase. In this phase, typically the Test Manager (or Test Lead, depending on the company) is involved in determining the effort and cost estimates for the entire project. This phase kicks off once the requirement gathering phase is completed, and based on the requirement analysis the team starts preparing the Test Plan. The result of the Test Planning phase is the Test Plan or Test Strategy and the testing effort estimation documents. Once the test planning phase is completed, the QA team can start with the test case development activity.
Entry Criteria:
- Requirements documents (updated version for unclear or missing requirements).
- Automation feasibility report.
Activities:
- Define the objective and scope of the project.
- List down the testing types involved in the STLC.
- Test effort estimation and resource planning.
- Selection of a testing tool, if required.
Deliverables:
- Test Plan or Test Strategy document.
- Testing effort estimation document.
The test case development activity starts once the test planning activity is finished. This is the phase of the STLC where the testing team writes down the detailed test cases. Along with test cases, the testing team also prepares any test data required for testing. Once the test cases are ready, they are reviewed by peer members or the QA lead.
Setting up the test environment is a vital part of the STLC. Basically, the test environment decides the conditions under which the software is tested. This is an independent activity and can be started in parallel with test case development. The test team is typically not involved in setting up the testing environment; depending on the company, the developer or the customer creates it. Meanwhile, the testing team should prepare smoke test cases to check the readiness of the test environment setup.
Once a test case passes, it can be marked as Passed. If a test case fails, the corresponding defect is reported to the developer team via the bug tracking system, and the bug is linked to the test case for further analysis. Ideally, every failed test case should be associated with at least one bug; using this linking, we can get each failed test case together with its associated bugs. Once the bug is fixed by the development team, the same test case can be executed again based on your test planning.
If any test cases are blocked due to a defect, they can be marked as Blocked, so we can report how many test cases passed, failed, were blocked or were not run. Once the defects are fixed, the same Failed or Blocked test cases can be executed again to retest the functionality.
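The bookkeeping described above can be sketched as a small data model. The status values, field names and the "BUG-101" identifier are illustrative, not the schema of any particular bug-tracking system:

```python
# Minimal sketch of test-execution bookkeeping: each test case carries a
# status, and a failed case is linked to the bug reported for it.
from dataclasses import dataclass
from typing import Optional

@dataclass
class TestCase:
    case_id: str
    status: str = "Not Run"           # Passed / Failed / Blocked / Not Run
    linked_bug: Optional[str] = None  # a Failed case links at least one bug

def mark_failed(case: TestCase, bug_id: str) -> None:
    """Mark the case Failed and link the defect reported for it."""
    case.status = "Failed"
    case.linked_bug = bug_id

cases = [TestCase("TC-1"), TestCase("TC-2"), TestCase("TC-3")]
cases[0].status = "Passed"
mark_failed(cases[1], "BUG-101")
cases[2].status = "Blocked"

# Report: how many test cases passed, failed, were blocked or not run.
report = {}
for c in cases:
    report[c.status] = report.get(c.status, 0) + 1
print(report)  # {'Passed': 1, 'Failed': 1, 'Blocked': 1}
```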
Entry Criteria:
- Test Plan or Test Strategy document.
- Test cases.
- Test data.
Activities:
- Execute the test cases based on the test planning.
- Mark the status of test cases as Passed, Failed, Blocked, Not Run, etc.
Deliverables:
- Test case execution report.
- Defect report.
Call a testing team meeting and evaluate the cycle completion criteria based on test coverage, quality, cost, time, critical business objectives, and the software. Discuss what went well and which areas need to improve, taking the lessons from the current STLC as input to upcoming test cycles; this helps remove bottlenecks in the STLC process. Test cases and bug reports are analyzed to find the defect distribution by type and severity. Once the test cycle is complete, the test closure report and test metrics are prepared.
UNIT 2
Manual Testing
This type includes testing the software manually, i.e. without using any automated tool or script. In this type, the tester takes on the role of an end user and tests the software to identify any unexpected behavior or bugs. There are different stages of manual testing, like unit testing, integration testing, system testing and user acceptance testing.
Testers use test plans, test cases or test scenarios to test the software to ensure the completeness of testing. Manual testing also includes exploratory testing, as testers explore the software to identify errors in it.
Automation Testing
(Diagram: Test Script → Test Execution → Test Automation.)
Apart from regression testing, automation testing is also used to test the application from the load, performance and stress points of view. It increases test coverage, improves accuracy, and saves time and money in comparison to manual testing.
Furthermore all GUI items, connections with databases, field validations etc.
can be efficiently tested by automating the manual process.
Testing Methods
There are different methods which can be used for Software testing. This
chapter briefly describes those methods.
The technique of testing without having any knowledge of the interior workings of the application is black box testing. The tester is oblivious to the system architecture and does not have access to the source code. Typically, when performing a black box test, a tester will interact with the system's user interface by providing inputs and examining outputs, without knowing how and where the inputs are worked upon.
Advantages:
Well suited and efficient for large code segments.
Code access is not required.
Clearly separates the user's perspective from the developer's perspective through visibly defined roles.
Large numbers of moderately skilled testers can test the application with no knowledge of implementation, programming language or operating systems.
Disadvantages:
Limited coverage, since only a selected number of test scenarios are actually performed.
Inefficient testing, due to the fact that the tester only has limited knowledge about an application.
Blind coverage, since the tester cannot target specific code segments or error-prone areas.
The test cases are difficult to design.
White box testing is the detailed investigation of the internal logic and structure of the code. White box testing is also called glass testing or open box testing. In order to perform white box testing, the tester needs to know the internal workings of the code.
Advantages:
As the tester has knowledge of the source code, it becomes very easy to find out
which type of data can help in testing the application effectively.
It helps in optimizing the code.
Extra lines of code can be removed which can bring in hidden defects.
Due to the tester's knowledge about the code, maximum coverage is attained during
test scenario writing.
Disadvantages:
Due to the fact that a skilled tester is needed to perform white box testing, the costs are increased.
Sometimes it is impossible to look into every nook and corner to find hidden errors that may create problems, as many paths will go untested.
It is difficult to maintain white box testing, as the use of specialized tools like code analyzers and debugging tools is required.
Grey Box testing is a technique to test the application with limited knowledge of the internal workings of an application. In software testing, the term "the more you know the better" carries a lot of weight when testing an application. Mastering the domain of a system always gives the tester an edge over someone with limited domain knowledge. Unlike black box testing, where the tester only tests the application's user interface, in grey box testing, the tester has access to design documents and the database. Having this knowledge, the tester is able to better prepare test data and test scenarios when making the test plan.
Advantages:
Offers combined benefits of black box and white box testing wherever possible.
Grey box testers don't rely on the source code; instead they rely on interface definitions and functional specifications.
Based on the limited information available, a grey box tester can design excellent test scenarios, especially around communication protocols and data type handling.
The test is done from the point of view of the user and not the designer.
Disadvantages:
Since access to the source code is not available, the ability to go over the code and test coverage is limited.
The tests can be redundant if the software designer has already run a test case.
Testing every possible input stream is unrealistic because it would take an unreasonable amount of time; therefore, many program paths will go untested.
(Comparison: in white box testing the internals are fully known; in grey box testing, partially known; in black box testing, not known.)
There are different levels during the process of Testing. In this chapter a brief
description is provided about these levels.
Levels of testing include the different methodologies that can be used while
conducting Software Testing. Following are the main levels of Software
Testing:
Functional Testing.
Non-functional Testing.
Functional Testing
This is a type of black box testing that is based on the specifications of the
software that is to be tested. The application is tested by providing input and
then the results are examined that need to conform to the functionality it was
intended for. Functional Testing of the software is conducted on a complete,
integrated system to evaluate the system's compliance with its specified
requirements. There are five steps that are involved when testing an application
for functionality.
Step I - The determination of the functionality that the intended application is meant to perform.
Step II - The creation of test data based on the specifications of the application.
Step III - The determination of the output based on the test data and the specifications of the application.
Step IV - The writing of test scenarios and the execution of test cases.
Step V - The comparison of actual and expected results based on the executed test cases.
An effective testing practice will see the above steps applied to the testing
policies of every organization and hence it will make sure that the organization
maintains the strictest of standards when it comes to software quality.
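The five steps can be illustrated end to end. The function under test and its "specification" here are hypothetical, invented purely for this sketch:

```python
# Step I: the (assumed) functionality - return the gross price given a
# net price and a tax rate, rounded to two decimal places.
def gross_price(net: float, tax_rate: float) -> float:
    return round(net * (1 + tax_rate), 2)

# Step II: test data derived from the specification.
# Step III: expected output determined from the specification, not the code.
test_data = [
    (100.0, 0.20, 120.0),
    (0.0, 0.20, 0.0),
    (49.99, 0.05, 52.49),
]

# Steps IV-V: execute the cases and compare actual vs. expected results.
for net, rate, expected in test_data:
    actual = gross_price(net, rate)
    assert actual == expected, f"{actual} != {expected}"
```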
Unit Testing
This type of testing is performed by developers before the setup is handed over to the testing team to formally execute the test cases. Unit testing is performed by the respective developers on the individual units of source code in their assigned areas. The developers use test data that is separate from the test data of the quality assurance team.
The goal of unit testing is to isolate each part of the program and show that
individual parts are correct in terms of requirements and functionality.
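A minimal sketch of a unit test using Python's unittest framework; the `is_leap_year` unit and its stated requirements are illustrative:

```python
# One isolated unit is checked against its requirements.
import unittest

def is_leap_year(year: int) -> bool:
    # Requirement: divisible by 4, except centuries not divisible by 400.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

class TestIsLeapYear(unittest.TestCase):
    def test_divisible_by_four(self):
        self.assertTrue(is_leap_year(2024))

    def test_century_not_divisible_by_400(self):
        self.assertFalse(is_leap_year(1900))

    def test_century_divisible_by_400(self):
        self.assertTrue(is_leap_year(2000))

# Run the suite programmatically (avoids unittest.main()'s argv handling).
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestIsLeapYear)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```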
Integration Testing
Integration testing is the testing of combined parts of an application to determine whether they function together correctly.
Bottom-up integration testing begins with unit testing, followed by tests of progressively higher-level combinations of units called modules or builds.
In top-down integration testing, the highest-level modules are tested first and progressively lower-level modules are tested after that. In a comprehensive software development environment, bottom-up testing is usually done first, followed by top-down testing.
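A bottom-up sketch: two illustrative units are tested individually first, then the higher-level module that combines them. All names here are hypothetical:

```python
# Unit 1: parse an amount from text.
def parse_amount(text: str) -> float:
    return float(text.strip())

# Unit 2: apply a percentage discount.
def apply_discount(amount: float, pct: float) -> float:
    return amount * (1 - pct / 100)

# Higher-level module combining the two units.
def checkout_total(text: str, discount_pct: float) -> float:
    return apply_discount(parse_amount(text), discount_pct)

# Bottom-up: test the units first...
assert parse_amount(" 200.0 ") == 200.0
assert apply_discount(200.0, 50) == 100.0
# ...then the integrated module built from them.
assert checkout_total(" 200.0 ", 50) == 100.0
```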
System Testing
This is the next level in the testing and tests the system as a whole. Once all the
components are integrated, the application as a whole is tested rigorously to see
that it meets Quality Standards. This type of testing is performed by a
specialized testing team.
System Testing is the first level of testing in which the application is tested as a whole.
The application is tested thoroughly to verify that it meets the functional and technical specifications.
The application is tested in an environment that is very close to the production environment where the application will be deployed.
System Testing enables us to test, verify and validate both the business requirements and the application's architecture.
Regression Testing
Whenever a change is made in a software application, it is quite possible that other areas within the application have been affected by this change. Regression testing verifies that a fixed bug hasn't resulted in another functionality or business rule violation. The intent of regression testing is to ensure that a change, such as a bug fix, did not result in another fault being uncovered in the application.
Regression testing:
Minimizes the gaps in testing when an application with changes has to be tested.
Verifies that the changes made did not affect any other area of the application.
Mitigates risks when performed on the application.
Increases test coverage without compromising timelines.
Increases speed to market the product.
Acceptance Testing
More ideas will be shared about the application, and more tests can be performed on it to gauge its accuracy against the reasons why the project was initiated. Acceptance tests are intended not only to point out simple spelling mistakes, cosmetic errors or interface gaps, but also to point out any bugs in the application that will result in system crashes or major errors in the application.
Alpha Testing
This test is the first stage of testing and is performed within the teams (developer and QA teams). Unit testing, integration testing and system testing combined are known as alpha testing. During this phase, the following will be tested in the application:
Spelling mistakes
Broken links
Cloudy directions
The application will be tested on machines with the lowest specification to test loading times and any latency problems.
Beta Testing
This test is performed after alpha testing has been successfully completed. In beta testing, a sample of the intended audience tests the application. Beta testing is also known as pre-release testing. Beta test versions of software are ideally distributed to a wide audience on the Web, partly to give the program a "real-world" test and partly to provide a preview of the next release. In this phase the audience will be testing the following:
Users will install and run the application and send their feedback to the project team.
Typographical errors, confusing application flow, and even crashes.
With this feedback, the project team can fix the problems before releasing the software to the actual users.
The more issues you fix that solve real user problems, the higher the quality of your application will be.
Having a higher-quality application when you release it to the general public will increase customer satisfaction.
Non-Functional Testing
This section covers testing the application for its non-functional attributes. Non-functional testing involves testing the software against requirements that are non-functional in nature but important as well, such as performance, security and user interface. Some of the important and commonly used non-functional testing types are mentioned below.
Performance Testing
Performance testing is mostly used to identify any bottlenecks or performance issues, which can be caused by:
Network delay.
Client-side processing.
Database transaction processing.
Load balancing between servers.
Data rendering.
It also checks quality attributes of the system such as:
Stability
Scalability
Load Testing
Most of the time, Load testing is performed with the help of automated tools
such as Load Runner, AppLoader, IBM Rational Performance Tester, Apache
JMeter, Silk Performer, Visual Studio Load Test etc.
Virtual users (VUsers) are defined in the automated testing tool and the script is
executed to verify the Load testing for the Software. The quantity of users can
be increased or decreased concurrently or incrementally based upon the
requirements.
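The virtual-user idea can be sketched with plain threads. This is only a toy illustration of the concept; real tools like JMeter or LoadRunner add ramp-up schedules, think time and detailed metrics:

```python
# Toy sketch of "virtual users": each thread plays one user running the
# same scripted action concurrently against the system under test.
import threading
import time

results = []
lock = threading.Lock()

def scripted_action(user_id: int) -> None:
    start = time.perf_counter()
    time.sleep(0.01)  # stand-in for one request to the system under test
    elapsed = time.perf_counter() - start
    with lock:
        results.append((user_id, elapsed))

# 20 concurrent virtual users; the count can be increased or decreased.
vusers = [threading.Thread(target=scripted_action, args=(i,)) for i in range(20)]
for t in vusers:
    t.start()
for t in vusers:
    t.join()

print(f"{len(results)} virtual users completed")  # 20 virtual users completed
```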
Stress Testing
This testing type includes testing the software's behavior under abnormal conditions. Taking away resources, or applying load beyond the actual load limit, is stress testing. The main intent is to test the software by applying load to the system and taking away the resources used by the software, in order to identify the breaking point.
This testing can be performed by testing different scenarios such as:
Shutdown or restart of Network ports randomly.
Turning the database on or off.
Running different processes that consume resources such as CPU,
Memory, server etc.
Usability Testing
This requirement is fulfilled, and the end user satisfied, if the intended goals are achieved effectively with the use of proper resources. Molich in 2000 stated that a user-friendly system should fulfill the following five goals.
Usability testing ensures that a good and user-friendly GUI is designed and is easy to use for the end user. UI testing can be considered a sub-part of usability testing.
Security Testing
Security testing involves testing the software in order to identify any flaws and gaps from the security and vulnerability point of view. Following are the main aspects that security testing should ensure:
Confidentiality.
Integrity.
Authentication.
Availability.
Authorization.
Non-repudiation.
Software is secure against known and unknown vulnerabilities.
Software data is secure.
Software is according to all security regulations.
Input checking and validation.
SQL injection attacks.
Injection flaws.
Session management issues.
Cross-site scripting attacks.
Buffer overflows vulnerabilities.
Directory traversal attacks.
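To see why SQL injection appears in the checklist above, here is a sketch using Python's sqlite3 module: building the SQL text by concatenation lets crafted input alter the query, while a parameterized query treats the same input purely as data. The table and values are invented for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

malicious = "nobody' OR '1'='1"

# Vulnerable: the input is spliced into the SQL text, so the OR clause
# becomes part of the query and matches every row.
unsafe = conn.execute(
    "SELECT secret FROM users WHERE name = '" + malicious + "'").fetchall()

# Safe: the input is bound as a parameter and compared as a plain string.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (malicious,)).fetchall()

print(len(unsafe), len(safe))  # 1 0  -- only the unsafe query leaks the row
```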
Portability Testing
Portability testing includes testing the software with the intent that it should be reusable and can be moved to another environment as well. Following are the strategies that can be used for portability testing:
Transferring installed software from one computer to another.
Building an executable (.exe) to run the software on different platforms.
Portability testing can be considered as one of the sub parts of System testing, as this testing type
includes the overall testing of Software with respect to its usage over different environments.
Computer Hardware, Operating Systems and Browsers are the major focus of Portability testing.
Following are some pre-conditions for Portability testing:
Software should be designed and coded, keeping in mind Portability Requirements.
Unit testing has been performed on the associated components.
Integration testing has been performed.
Test environment has been established.
Basis Path Testing: Basis path testing is a white box testing technique first proposed by Tom McCabe. The basis path method enables the test case designer to derive a logical complexity measure of a procedural design and use this measure as a guide for defining a basis set of execution paths. Test cases derived to exercise the basis set are guaranteed to execute every statement in the program at least once during testing.
Flow Graph Notation: The flow graph depicts logical control flow using a diagrammatic
notation. Each structured construct has a corresponding flow graph symbol.
On a flow graph:
Note that compound boolean expressions in tests generate at least two predicate nodes and additional arcs.
Example:
Cyclomatic Complexity: a software metric that provides a quantitative measure of the logical complexity of a program. When used in the context of the basis path testing method, the value computed for cyclomatic complexity defines the number of independent paths in the basis set of a program, and provides an upper bound for the number of tests that must be conducted to ensure that all statements have been executed at least once.
An independent path is any path through the program that introduces at least one new set of
processing statements or a new condition.
Cyclomatic complexity has a foundation in graph theory and provides us with an extremely useful software metric. Complexity is computed in one of three ways:
1. The number of regions of the flow graph corresponds to the Cyclomatic complexity.
This value gives the number of independent paths in the basis set, and an upper bound for the
number of tests to ensure that each statement and both sides of every condition is executed at
least once.
An independent path is any path through a program that introduces at least one new set of
processing statements (i.e., a new node) or a new condition (i.e., a new edge)
1: WHILE NOT EOF LOOP
2:    Read Record;
2:    IF field1 equals 0 THEN
3:       Add field1 to Total
3:       Increment Counter
4:    ELSE
4:       IF field2 equals 0 THEN
5:          Print Total, Counter
5:          Reset Counter
6:       ELSE
6:          Subtract field2 from Total
7:       END IF
8:    END IF
8:    Print "End Record"
9: END LOOP
9: Print Counter
The example has the following independent paths:
1. 1, 9
2. 1, 2, 3, 8, 1, 9
3. 1, 2, 4, 5, 7, 8, 1, 9
4. 1, 2, 4, 6, 7, 8, 1, 9
Cyclomatic Complexity of 4; computed using any of these 3 formulas:
1. #Edges - #Nodes + #terminal vertices (usually 2)
2. #Predicate Nodes + 1
3. Number of regions of flow graph.
Cyclomatic complexity provides upper bound for number of tests required to guarantee
coverage of all program statements.
Note: some paths may only be able to be executed as part of another test.
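The three formulas can be checked against the example's flow graph. The edge list below is transcribed from the node numbers in the pseudocode and the independent paths listed above:

```python
# Cyclomatic complexity of the example flow graph, computed two ways.
edges = [
    (1, 2), (1, 9),   # WHILE test: enter the loop body, or exit
    (2, 3), (2, 4),   # IF field1 equals 0
    (3, 8),
    (4, 5), (4, 6),   # IF field2 equals 0
    (5, 7), (6, 7),
    (7, 8),
    (8, 1),           # back edge to the WHILE test
]
nodes = {n for edge in edges for n in edge}

# Formula 1: V(G) = #Edges - #Nodes + 2
v_edges_nodes = len(edges) - len(nodes) + 2

# Formula 2: V(G) = #Predicate Nodes + 1 (the WHILE and the two IFs)
predicate_nodes = {1, 2, 4}
v_predicates = len(predicate_nodes) + 1

print(v_edges_nodes, v_predicates)  # 4 4, matching the basis set of 4 paths
```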
Graph Matrices: The procedure for deriving the flow graph and even determining a set of basis paths is
amenable to mechanization. To develop a software tool that assists in basis path testing, a data structure,
called a graph matrix can be quite useful.
A Graph Matrix is a square matrix whose size is equal to the number of nodes on the flow graph. Each
row and column corresponds to an identified node, and matrix entries correspond to connections between
nodes.
The connection matrix can also be used to find the cyclomatic complexity.
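A sketch of the graph-matrix computation for the same example flow graph: each row with more than one outgoing connection contributes (connections - 1), and the sum plus 1 gives the cyclomatic complexity:

```python
# Build the connection matrix: matrix[i][j] == 1 when there is an arc
# from node i to node j on the flow graph.
edges = [(1, 2), (1, 9), (2, 3), (2, 4), (3, 8),
         (4, 5), (4, 6), (5, 7), (6, 7), (7, 8), (8, 1)]
nodes = sorted({n for edge in edges for n in edge})

matrix = {i: {j: 0 for j in nodes} for i in nodes}
for src, dst in edges:
    matrix[src][dst] = 1

# Each row with more than one connection contributes (connections - 1);
# the sum plus 1 is the cyclomatic complexity V(G).
row_connections = {i: sum(matrix[i].values()) for i in nodes}
v_g = sum(c - 1 for c in row_connections.values() if c > 1) + 1
print(v_g)  # 4, agreeing with the basis-path computation
```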
Control Structure Testing: Described below are some of the variations of Control Structure Testing:
Condition Testing: Condition testing is a test case design method that exercises the logical conditions
contained in a program module.
Data Flow Testing: The data flow testing method selects test paths of a program according to the
locations of definitions and uses of variables in the program.
Loop Testing: Loop Testing is a white box testing technique that focuses exclusively on the validity of
loop constructs. Four classes of loops can be defined: Simple loops, Concatenated loops, nested loops,
and unstructured loops.
Simple Loops: The following sets of tests can be applied to simple loops, where 'n' is the maximum number of allowable passes through the loop.
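For a concrete simple-loop test set (commonly: skip the loop entirely, one pass, two passes, a typical number of passes, n-1, n, and n+1 passes), here is a sketch against a hypothetical loop whose maximum allowable pass count is n:

```python
# Hypothetical loop under test: sums the first `passes` items, with `n`
# as the maximum number of allowable passes through the loop.
def sum_first(items, passes, n=10):
    if passes > n:
        raise ValueError("too many passes")
    total = 0
    for i in range(passes):
        total += items[i]
    return total

n = 10
items = list(range(n + 1))

# Simple-loop test values: 0, 1, 2, a typical m, n-1, and n passes.
for passes in (0, 1, 2, 5, n - 1, n):
    assert sum_first(items, passes) == sum(items[:passes])

# n + 1 passes must be rejected.
try:
    sum_first(items, n + 1)
    raise AssertionError("expected rejection of n + 1 passes")
except ValueError:
    pass
```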
Nested Loops: If we extend the test approach from simple loops to nested loops, the number of possible tests grows geometrically as the level of nesting increases. To limit the number of tests:
1. Start at the innermost loop. Set all other loops to minimum values.
2. Conduct simple loop tests for the innermost loop while holding the outer loops at their minimum iteration parameter values. Add other tests for out-of-range or excluded values.
3. Work outward, conducting tests for the next loop, but keep all other outer loops at minimum values and other nested loops at "typical" values.
Concatenated Loops: Concatenated loops can be tested using the approach defined for simple loops, if
each of the loops is independent of the other. However, if two loops are concatenated and the loop counter
for loop 1 is used as the initial value for loop 2, then the loops are not independent.
Unstructured Loops: Whenever possible, this class of loops should be redesigned to reflect the use of
the structured programming constructs.
What is Black Box Testing?
First we will learn what black box testing is, how it is performed, and the different black box
testing (BBT) techniques used in testing.
Black box testing is a software testing method used to test the software without
knowing the internal structure of the code or program.
This is most likely the testing method that the majority of testers actually perform in
practice.
The software under test is treated as a "black box": we test it without examining its
internal structure. All testing is done from the customer's point of view; the tester is only
aware of what the software is supposed to do, not how it processes requests internally.
While testing, the tester knows the inputs and the expected outputs of the software, but not
how the software or application actually processes the input requests and produces the
outputs. The tester supplies both valid and invalid inputs and determines the correct expected
outputs. All the test cases for this method are derived from the requirements and
specification documents.
The main purpose of black box testing is to check whether the software works as expected
per the requirement document and whether it meets the user's expectations.
There are different types of testing used in industry, and each testing type has its own
advantages and disadvantages. No single approach finds everything: some bugs cannot be
found using black box testing alone, and others cannot be found using white box testing alone.
Types of Black Box Testing Techniques: The following black box testing techniques are used for
testing the software application.
Boundary value analysis and Equivalence Class Partitioning both are test case design techniques in
black box testing
Boundary Value Analysis is the most commonly used test case design method for black box
testing. As we know, most errors occur at the boundaries of the input values. This technique
is used to find errors at the boundaries of the input value range rather than at the center of the
range.
Boundary Value Analysis is the next step after Equivalence Class Partitioning, in which all
test cases are designed at the boundaries of the equivalence classes.
Suppose we have a software application whose input text box accepts values ranging from 1
to 1000; in this case we have both valid and invalid inputs.
Here are the test cases for an input box accepting numbers 1-1000, using boundary value analysis:

Min value - 1    0
Min value        1
Min value + 1    2
Normal value     any value within 1-1000
Max value - 1    999
Max value        1000
Max value + 1    1001
This technique is not applicable if the input value range is not fixed, i.e. if the boundaries
of the input are not defined.
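The seven boundary cases from the table above can be checked directly against a toy validator (the validator itself, and 500 as the "normal" in-range value, are assumptions for illustration):

```python
# Sketch: boundary value analysis for an input box accepting 1..1000.
# The seven test values follow the boundary table; 500 stands in for
# "any value within the range" (an assumed typical value).

LOW, HIGH = 1, 1000

def is_valid(n):
    """Hypothetical system under test: accepts only 1..1000."""
    return LOW <= n <= HIGH

boundary_cases = {
    0: False,      # min - 1
    1: True,       # min
    2: True,       # min + 1
    500: True,     # a typical in-range value (assumed)
    999: True,     # max - 1
    1000: True,    # max
    1001: False,   # max + 1
}
for value, expected in boundary_cases.items():
    assert is_valid(value) == expected
print("all boundary cases behave as expected")
```

An off-by-one error in the validator (e.g. `n < HIGH`) would be caught immediately by the `1000: True` case, which is exactly the class of bug this technique targets.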
Equivalence Class Partitioning is a black box test case design technique used for writing test
cases. This approach reduces a huge set of possible inputs to a small but equally effective
set. This is done by dividing the inputs into classes and taking one value from each class. The
method is used when exhaustive testing is impractical and to avoid redundancy among inputs.
In equivalence partitioning, inputs are divided based on the input values:
If the input value is a range, then we have one valid equivalence class and two invalid
equivalence classes.
If the input value is a specific set, then we have one valid equivalence class and one invalid
equivalence class.
If the input value is a number, then we have one valid equivalence class and two invalid
equivalence classes.
If the input value is a Boolean, then we have one valid equivalence class and one invalid
equivalence class.
Equivalence partitioning is a test case design technique to divide the input data of software
into different equivalence data classes. Test cases are designed for each equivalence data
class. The equivalence partitions are frequently derived from the requirements specification
for input data that influence the processing of the test object. Using this method reduces the
time necessary for testing the software by using fewer, more effective test cases.
It can be used at any level of software testing and is preferably a good technique to use first.
In this technique, only one condition is tested from each partition, because we assume that
all the conditions in one partition behave in the same manner. If one condition in a partition
works, the others will definitely work; likewise, we assume that if one condition does not
work, none of the conditions in that partition will work.
Equivalence partitioning is a testing technique in which input values are divided into classes
for testing. For example, with partitions 0-5, 6-10, and 11-14, all values within a partition
are treated as equivalent: 0-5 are equivalent to each other, 6-10 are equivalent, and 11-14
are equivalent.
If 6-10 is the valid range, then at the time of testing we test 4 and 12 as invalid values and 7
as a valid one.
It is easy to check a small input range like 6-10 exhaustively but much harder to check a
range like 2-600. Testing is easier with fewer test cases, but you should be careful: by
choosing 7 as the valid input, you are trusting that the developer coded the correct valid
range (6-10).
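The three partitions mentioned above (0-5, 6-10, 11-14) can be sketched in code, with one representative value standing in for its whole class (the system under test here is a hypothetical one that accepts only 6-10):

```python
# Sketch: equivalence partitioning with partitions 0-5, 6-10, 11-14,
# assuming 6-10 is the valid class. One representative per partition
# stands in for every value in that partition.

def accepts(n):
    """Hypothetical system under test: accepts only the valid range 6-10."""
    return 6 <= n <= 10

representatives = {4: False, 7: True, 12: False}   # one value per partition
for value, expected in representatives.items():
    assert accepts(value) == expected
print("one test per partition is assumed to cover the whole class")
```

Three tests replace fifteen exhaustive ones; the trade-off is the assumption that every value in a partition behaves like its representative.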
Boundary value analysis is a test case design technique to test boundary values between
partitions (both the valid boundary partition and the invalid boundary partition). A boundary
value is an input or output value on the border of an equivalence partition, including the
minimum and maximum values just inside and just outside the boundaries. Boundary value
analysis is normally part of stress and negative testing.
Using the boundary value analysis technique, the tester creates test cases for the required
input field. For example, for an Address text box which allows a maximum of 500 characters,
writing a test case for every possible character count would be very difficult, so boundary
value analysis is chosen instead.
Example 1
Suppose you have a very important tool at the office whose User Name and Password fields
accept a minimum of 8 characters and a maximum of 12 characters. The valid range is 8-12;
the invalid ranges are 7 or fewer characters and 13 or more characters.
Write test cases for a valid partition value, an invalid partition value, and the exact boundary values.
Test cases for an application whose input box accepts numbers between 1 and 1000. Valid
range: 1-1000; invalid ranges: 0 and 1001 or more.
Write test cases for a valid partition value, an invalid partition value, and the exact boundary values.
Test Case 1: Consider test data exactly at the boundaries of the input domain, i.e. values 1
and 1000.
Test Case 2: Consider test data with values just below the extreme edges of the input domain, i.e.
values 0 and 999.
Test Case 3: Consider test data with values just above the extreme edges of the input domain, i.e.
values 2 and 1001.
Combinatorial test design, as the name suggests, is a technique of combining the data /
entities as input parameters for testing, to increase coverage. This technique is beneficial when
we have to test with a huge amount of data having many permutations and combinations.
The beauty of this technique is that it maximizes coverage with a comparatively smaller
number of test cases. The pairs of parameters that are identified should be independent of each other.
It is a black box technique, so as with other black box techniques we do not need
implementation knowledge of the system. The point here is to identify the correct pairs of
input parameters.
There are many techniques of CTD (combinatorial test design), of which OATS (orthogonal
array testing technique) is widely used.
How to use OATS?
Let us consider that you have to identify test cases for a web page that has 4 sections:
Headlines, Details, References, and Comments. Each section can be displayed, not displayed,
or show an error message. You are required to design test conditions to test the interaction
between the different sections.
In this case:
1. Number of variables (factors) = 4 (the four sections).
2. Values each variable can take = 3 (displayed, not displayed, and error message).
3. The orthogonal array would therefore be a 3^4 array.
4. Find an appropriate orthogonal array for 4 factors and 3 levels; standard tables (such as
the L9 array) are readily available.
5. Now, map this array to the requirements.
Based on the resulting table, design your test cases. Also look out for special or left-over
test cases.
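The steps above can be sketched with the standard L9 orthogonal array for 4 factors at 3 levels (the array itself is a published standard; the mapping to the page sections mirrors the example):

```python
# Sketch: the standard L9 orthogonal array (4 factors, 3 levels 0/1/2),
# mapped to the page-section example. Each of the 9 rows is one test case;
# every PAIR of sections still sees all 3x3 level combinations, which is
# the pairwise-coverage property OATS relies on.
from itertools import combinations, product

L9 = [
    (0, 0, 0, 0), (0, 1, 1, 1), (0, 2, 2, 2),
    (1, 0, 1, 2), (1, 1, 2, 0), (1, 2, 0, 1),
    (2, 0, 2, 1), (2, 1, 0, 2), (2, 2, 1, 0),
]
levels = ["displayed", "not displayed", "error message"]
sections = ["Headlines", "Details", "References", "Comments"]

# Map one row of the array to a concrete test case.
case_4 = {sections[i]: levels[L9[3][i]] for i in range(4)}

# Verify pairwise coverage: for every pair of factors, all 9 level
# combinations occur somewhere in the array.
for a, b in combinations(range(4), 2):
    seen = {(row[a], row[b]) for row in L9}
    assert seen == set(product(range(3), repeat=2))

print(f"{len(L9)} tests instead of {3 ** 4} exhaustive combinations")
```

Nine tests replace eighty-one, while every two-way interaction between sections is still exercised at least once.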
Conclusion:
No testing technique provides a guarantee of 100% coverage. Each technique has its
own way of selecting the test conditions, and along similar lines there are some limitations
to using this technique as well.
DEFINITION
A test case is a set of conditions or variables under which a tester will determine
whether a system under test satisfies requirements or works correctly.
The process of developing test cases can also help find problems in the
requirements or design of an application.
A test case can have the following elements. Note, however, that normally a test
management tool is used by companies and the format is determined by the tool
used.
Test Suite ID: The ID of the test suite to which this test case belongs.
Test Case ID: The ID of the test case.
Test Case Summary: The summary / objective of the test case.
Related Requirement: The ID of the requirement this test case relates/traces to.
Prerequisites: Any prerequisites or preconditions that must be fulfilled prior to executing the test.
Test Procedure: Step-by-step procedure to execute the test.
Test Data: The test data, or links to the test data, that are to be used while conducting the test.
Expected Result: The expected result of the test.
Actual Result: The actual result of the test; to be filled in after executing the test.
Status: Pass or Fail. Other statuses can be 'Not Executed' if testing is not performed and 'Blocked' if testing is blocked.
Remarks: Any comments on the test case or test execution.
Created By: The name of the author of the test case.
Date of Creation: The date of creation of the test case.
Executed By: The name of the person who executed the test.
Date of Execution: The date of execution of the test.
Test Environment: The environment (Hardware/Software/Network) in which the test was executed.
A filled-in sample test case might end with:
Status: Fail
Remarks: This is a sample test case.
Created By: John Doe
Date of Creation: 01/14/2020
Executed By: Jane Roe
Date of Execution: 02/16/2020
As far as possible, write test cases in such a way that you test only one thing
at a time. Do not overlap or complicate test cases. Attempt to make your test
cases ‘atomic’.
Ensure that all positive scenarios and negative scenarios are covered.
Language:
o Write in simple and easy to understand language.
o Use active voice: Do this, do that.
Use exact and consistent names (of forms, fields, etc).
Characteristics of a good test case:
o Accurate: Tests exactly what its description says it will test.
o Economical: No unnecessary steps or words.
o Traceable: Capable of being traced to requirements.
o Repeatable: Can be used to perform the test over and over.
o Reusable: Can be reused if necessary.
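The element list above can be sketched as a small record type (the field names and defaults here are assumptions mirroring that list; real projects normally get this structure from their test management tool):

```python
# Sketch (assumed field names, mirroring the test case element list):
# a test case captured as a small record.
from dataclasses import dataclass, field

@dataclass
class TestCase:
    test_suite_id: str
    test_case_id: str
    summary: str
    related_requirement: str
    prerequisites: list = field(default_factory=list)
    test_procedure: list = field(default_factory=list)
    test_data: dict = field(default_factory=dict)
    expected_result: str = ""
    actual_result: str = ""               # filled in after execution
    status: str = "Not Executed"          # Pass / Fail / Not Executed / Blocked

tc = TestCase(
    test_suite_id="TS-01",
    test_case_id="TC-001",
    summary="Input box accepts values 1-1000",
    related_requirement="REQ-42",
    test_procedure=["Open the form", "Enter 1000", "Submit"],
    expected_result="Value accepted",
)
print(tc.status)  # Not Executed
```

Keeping one objective per record is what the 'atomic' guideline above amounts to in practice.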
Test Strategy
The first step in planning white box testing is to develop a test strategy based on
risk analysis. The purpose of a test strategy is to clarify the major activities
involved, key decisions made, and challenges faced in the testing effort. This
includes identifying testing scope, testing techniques, coverage metrics, test
environment, and test staff skill requirements. The test strategy must account for
the fact that time and budget constraints prohibit testing every component of a
software system and should balance test effectiveness with test efficiency based on
risks to the system. The level of effectiveness necessary depends on the use of
software and its consequence of failure. The higher the cost of failure for software,
the more sophisticated and rigorous a testing approach must be to ensure
effectiveness. Risk analysis provides the right context and information to derive a
test strategy.
Test strategy is essentially a management activity. A test manager (or similar role)
is responsible for developing and managing a test strategy. The members of the
development and testing team help the test manager develop the test strategy.
Test Plan
The test plan should manifest the test strategy. The main purpose of having a test
plan is to organize the subsequent testing process. It includes test areas covered,
test technique implementation, test case and data selection, test results validation,
test cycles, and entry and exit criteria based on coverage metrics. In general, the
test plan should incorporate both a high-level outline of which areas are to be
tested and what methodologies are to be used and a general description of test
cases, including prerequisites, setup, execution, and a description of what to look
for in the test results. The high-level outline is useful for administration, planning,
and reporting, while the more detailed descriptions are meant to make the test
process go smoothly.
While not all testers like using test plans, they provide a number of benefits.
A test manager (or similar role) is responsible for developing and managing a test
plan. The development managers are also part of test plan development, since the
schedules in the test plan are closely tied to that of the development schedules.
The last activity within the test planning stage is test case development. Test case
description includes preconditions, generic or specific test inputs, expected results,
and steps to perform to execute the test. There are many definitions and formats for
test case description. In general, the intent of the test case is to capture what the
particular test is designed to accomplish. Risk analysis, test strategy, and the test
plan should guide test case development. Testing techniques and classes of tests
applicable to white box testing are discussed later in Sections 4.2 and 4.3
respectively.
The members of the testing team are responsible for developing and documenting
test cases.
Test Automation
Test automation provides automated support for the process of managing and
executing tests, especially for repeating past tests. All the tests developed for the
system should be collected into a test suite. Whenever the system changes, the
suite of tests that correspond to the changes or those that represent a set of
regression tests can be run again to see if the software behaves as expected. Test
drivers or suite drivers support executing test suites. Test drivers basically help in
setup, execution, assertion, and teardown for each of the tests. In addition to
driving test execution, test automation requires some automated mechanisms to
generate test inputs and validate test results. The nature of test data generation and
test results validation largely depends on the software under test and on the testing
intentions of particular tests.
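The setup / execution / assertion / teardown cycle that test drivers support can be sketched with Python's standard unittest module (the system under test, a tiny in-memory store, is hypothetical):

```python
# Sketch: a minimal test driver using the standard-library unittest module,
# illustrating setup, execution, assertion, and teardown for each test.
import unittest

class Store:
    """Hypothetical system under test: a tiny in-memory key-value store."""
    def __init__(self):
        self.data = {}

    def put(self, key, value):
        self.data[key] = value

    def get(self, key):
        return self.data.get(key)

class StoreTests(unittest.TestCase):
    def setUp(self):                      # setup: a fresh fixture per test
        self.store = Store()

    def tearDown(self):                   # teardown: release the fixture
        self.store = None

    def test_put_then_get(self):          # execution + assertion
        self.store.put("user", "alice")
        self.assertEqual(self.store.get("user"), "alice")

    def test_missing_key_is_none(self):
        self.assertIsNone(self.store.get("absent"))

# A minimal suite driver: collect the tests into a suite and run them all.
suite = unittest.TestLoader().loadTestsFromTestCase(StoreTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Collecting tests into a suite like this is what makes re-running them as a regression set after every change cheap.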
For security testing, it is often necessary for the tester to have more control over
the environment than in many other testing activities. This is because the tester
must be able to examine and manipulate software/environment interactions at a
greater level of detail, in search of weaknesses that could be exploited by an
attacker. The tester must also be able to control these interactions. The test
environment should be isolated, especially if, for example, a test technique
produces potentially destructive results during test that might invalidate the results
of any concurrent test or other activity. Testing malware (malicious software) can
also be dangerous without strict isolation.
Test Execution
Test execution involves running test cases developed for the system and reporting
test results. The first step in test execution is generally to validate the infrastructure
needed for running tests in the first place. This infrastructure primarily
encompasses the test environment and test automation, including stubs that might
be needed to run individual components, synthetic data used for testing or
populating databases that the software needs to run, and other applications that
interact with the software. The issues being sought are those that will prevent the
software under test from being executed or else cause it to fail for reasons not
related to faults in the software itself.
The members of the test team are responsible for test execution and reporting.
Testing Techniques
Fault Injection
The fault injection technique perturbs program states by injecting software source
code to force changes into the state of the program as it executes. Instrumentation
is the process of non-intrusively inserting code into the software that is being
analyzed and then compiling and executing the modified (or instrumented)
software. Assertions are added to the code to raise a flag when a violation
condition is encountered. This form of testing measures how software behaves
when it is forced into anomalous circumstances. Basically this technique forces
non-normative behavior of the software, and the resulting understanding can help
determine whether a program has vulnerabilities that can lead to security
violations. This technique can be used to force error conditions to exercise the
error handling code, change execution paths, input unexpected (or abnormal) data,
change return values, etc. In [Thompson 02], runtime fault injection is explained
and advocated over code-based fault injection methods. One of the drawbacks of
code based methods listed in the book is the lack of access to source code.
However, in this content area, the assumptions are that source code is available and
that the testers have the knowledge and expertise to understand the code for
security implications. Refer to [Voas 98] for a detailed understanding of software
fault injection concepts, methods, and tools.
Abuse Cases
Abuse cases help security testers view the software under test in the same light as
attackers do. Abuse cases capture the non-normative behavior of the system. While
in [McGraw 04c] abuse cases are described more as a design analysis technique
than as a white box testing technique, the same technique can be used to develop
innovative and effective test cases mirroring the way attackers would view the
system. With access to the source code, a tester is in a better position to quickly see
where the weak spots are compared to an outside attacker. The abuse case can also
be applied to interactions between components within the system to capture
abnormal behavior, should a component misbehave. The technique can also be
used to validate design decisions and assumptions. The simplest, most practical
method for creating abuse cases is usually through a process of informed
brainstorming, involving security, reliability, and subject matter expertise. Known
attack patterns form a rich source for developing abuse cases.
Classes of Tests
Creating security tests other than ones that directly map to security specifications is
challenging, especially tests that intend to exercise the non-normative or non-
functional behavior of the system. When creating such tests, it is helpful to view
the software under test from multiple angles, including the data the system is
handling, the environment the system will be operating in, the users of the software
(including software components), the options available to configure the system,
and the error handling behavior of the system. There is an obvious interaction and
overlap between the different views; however, treating each one with specific
focus provides a unique perspective that is very helpful in developing effective
tests.
Data
All input data should be untrusted until proven otherwise, and all data must be
validated as it crosses the boundary between trusted and untrusted environments
[Howard 02]. Data sensitivity/criticality plays a big role in data-based testing;
however, this does not imply that other data can be ignored—non-sensitive data
could allow a hacker to control a system. When creating tests, it is important to test
and observe the validity of data at different points in the software. Tests based on
data and data flow should explore incorrectly formed data and stress the size of
the data. The section "Attacking with Data Mutation" in [Howard 02] describes
different properties of data and how to mutate data based on given properties. To
understand different attack patterns relevant to program input, refer to chapter six,
"Crafting (Malicious) Input," in [Hoglund 04]. Tests should validate data from all
channels, including web inputs, databases, networks, file systems, and environment
variables. Risk analysis should guide the selection of tests and the data set to be
exercised.
Fuzzing
Although normally associated exclusively with black box security testing, fuzzing
can also provide value in a white box testing program. Specifically, [Howard 06]
introduced the concept of “smart fuzzing.” Indeed, a rigorous testing program
involving smart fuzzing can be quite similar to the sorts of data testing scenarios
presented above and can produce useful and meaningful results as well. [Howard
06] claims that Microsoft finds some 20 to 25 percent of the bugs in their code via
fuzzing techniques. Although much of that is no doubt “dumb” fuzzing in black
box tests, “smart” fuzzing should also be strongly considered in a white box testing
program.
Environment
Software can only be considered secure if it behaves securely under all operating
environments. The environment includes other systems, users, hardware, resources,
networks, etc. A common cause of software field failure is miscommunication
between the software and its environment [Whittaker 02]. Understanding the
environment in which the software operates, and the interactions between the
software and its environment, helps in uncovering vulnerable areas of the system.
Understanding dependency on external resources (memory, network bandwidth,
databases, etc.) helps in exploring the behavior of the software under different
stress conditions. Another common source of input to programs is environment
variables. If the environment variables can be manipulated, then they can have
security implications. Similar conditions occur for registry information,
configuration files, and property files. In general, analyzing entities outside the
direct control of the system provides good insights in developing tests to ensure the
robustness of the software under test, given the dependencies.
Component Interfaces
Configuration
In many cases, software comes with various parameters set by default, possibly
with no regard for security. Often, functional testing is performed only with the
default settings, thus leaving sections of code related to non-default settings
untested. Two main concerns with configuration parameters with respect to
security are storing sensitive data in configuration files and configuration
parameters changing the flow of execution paths. For example, user privileges,
user roles, or user passwords are stored in the configuration files, which could be
manipulated to elevate privilege, change roles, or access the system as a valid user.
Configuration settings that change the path of execution could exercise vulnerable
code sections that were not developed with security in mind. The change of flow
also applies to cases where the settings are changed from one security level to
another, where the code sections are developed with security in mind. For example,
changing an endpoint from requiring authentication to not requiring authentication
means the endpoint can be accessed by everyone. When a system has multiple
configurable options, testing all combinations of configuration can be time
consuming; however, with access to source code, a risk-based approach can help in
selecting combinations that have higher probability in exposing security violations.
In addition, coverage analysis should aid in determining gaps in test coverage of
code paths.
Error handling
The most neglected code paths during the testing process are error handling
routines. Error handling in this paper includes exception handling, error recovery,
and fault tolerance routines. Functionality tests are normally geared towards
validating requirements, which generally do not describe negative (or error)
scenarios. Even when negative functional tests are created, they don’t test for non-
normative behavior or extreme error conditions, which can have security
implications. For example, functional stress testing is not performed with an
objective to break the system to expose security vulnerability. Validating the error
handling behavior of the system is critical during security testing, especially
subjecting the system to unusual and unexpected error conditions. Unusual errors
are those that have a low probability of occurrence during normal usage.
Unexpected errors are those that are not explicitly specified in the design
specification, and the developers did not think of handling the error. For example,
a system call may throw an "unable to load library" error, which may not be
explicitly listed in the design documentation as an error to be handled. All aspects
of error handling should be verified and validated, including error propagation,
error observability, and error recovery. Error propagation is how the errors are
propagated through the call chain. Error observability is how the error is identified
and what parameters are passed as error messages. Error recovery is getting back
to a state conforming to specifications. For example, return codes for errors may
not be checked, leading to uninitialized variables and garbage data in buffers; if the
memory is manipulated before causing a failure, the uninitialized memory may
contain attacker-supplied data. Another common mistake to look for is when
sensitive information is included as part of the error messages.
Tools
Source code analysis is the process of checking source code for coding problems
based on a fixed set of patterns or rules that might indicate possible security
vulnerabilities. Static analysis tools scan the source code and automatically detect
errors that typically pass through compilers and become latent problems. The
strength of static analysis depends on the fixed set of patterns or rules used by the
tool; static analysis does not find all security issues. The output of the static
analysis tool still requires human evaluation. For white box testing, the results of
the source code analysis provide a useful insight into the security posture of the
application and aid in test case development. White box testing should verify that
the potential vulnerabilities uncovered by the static tool do not lead to security
violations. Some static analysis tools provide data-flow and control-flow analysis
support, which are useful during test case development.
A detailed discussion about the analysis or the tools is outside the scope of this
content area. This section addresses how source code analysis tools aid in white
box testing. For a more detailed discussion on the analysis and the tools, refer to
the Source Code Analysis content area.
In general, white box testers should have access to the same tools, documentation,
and environment as the developers and functional testers on the project do. In
addition, tools that aid in program understanding, such as software visualization,
code navigation, debugging, and disassembly tools, greatly enhance productivity
during testing.
Coverage Analysis
Code coverage tools measure how thoroughly tests exercise programs. There are
many different coverage measures, including statement coverage, branch coverage,
and multiple-condition coverage. The coverage tool would modify the code to
record the statements executed and which expressions evaluate which way (the true
case or the false case of the expression). Modification is done either to the source
code or to the executable the compiler generates. There are several commercial and
freeware coverage tools available. Coverage tool selection should be based on the
type of coverage measure selected, the language of the source code, the size of the
program, and the ease of integration into the build process.
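The instrumentation idea can be sketched with a toy statement-coverage tracer (real coverage tools such as coverage.py do this far more robustly; the function under measurement is a made-up example):

```python
# Sketch: a toy statement-coverage tracer using sys.settrace, recording
# which lines of one function execute. This only shows the mechanism that
# real coverage tools implement properly.
import sys

def traced(fn, *args):
    executed = set()
    def tracer(frame, event, arg):
        # Record line events, but only for the function we are measuring.
        if event == "line" and frame.f_code is fn.__code__:
            executed.add(frame.f_lineno - fn.__code__.co_firstlineno)
        return tracer
    sys.settrace(tracer)
    try:
        fn(*args)
    finally:
        sys.settrace(None)
    return executed

def classify(n):          # the function under measurement (hypothetical)
    if n > 0:
        return "positive"
    return "non-positive"

true_branch = traced(classify, 5)
both = true_branch | traced(classify, -5)
print(both - true_branch)   # the -5 run covers the line the 5 run missed
```

Comparing the covered line sets across runs is exactly how a coverage tool reports untested branches.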
Profiling
Profiling allows testers to learn where the software under test is spending most of
its time and the sequence of function calls as well. This information can show
which pieces of the software are slower than expected and which functions are
called more often than expected. From a security testing perspective, knowledge
of performance bottlenecks helps uncover vulnerable areas that are not
apparent during static analysis. The call graph produced by the profiling tool is
helpful in program understanding. Certain profiling tools also detect memory leaks
and memory access errors (potential sources of security violations). In general, the
functional testing team or the development team should have access to the
profiling tool, and the security testers should use the same tool to understand the
dynamic behavior of the software under test.
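A minimal profiling run can be sketched with the standard-library cProfile and pstats modules (the workload functions are made-up examples):

```python
# Sketch: profiling a hypothetical workload with the standard-library
# cProfile module to see where time goes and how often functions are called.
import cProfile
import io
import pstats

def slow_square(n):
    """A deliberately busy function -- the assumed hot spot."""
    return sum(i * i for i in range(n))

def workload():
    return [slow_square(1000) for _ in range(50)]

profiler = cProfile.Profile()
profiler.enable()
workload()
profiler.disable()

# Sort by cumulative time and render the top entries into a string report.
out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(5)
report = out.getvalue()
print("slow_square" in report)  # True: the hot function shows up on top
```

The same report also gives call counts, which is how unexpectedly frequent calls (a common symptom of design problems) are spotted.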
A developer's attitude of "what can I do to make this program look good?" must not stand in
the way of vigorous testing. Test case selection is one key factor in successful testing;
insight and imagination are essential in the design of good test cases. Testers should be
independent from implementors to eliminate oversights due to propagated misconceptions.
Levels of Program Correctness
1. No syntax errors [compilers and strongly-typed programming languages].
2. No semantic errors.
3. There exists some test data for which the program gives a correct answer.
4. The program gives the correct answer for reasonable or random test data.
5. The program gives the correct answer for difficult test data.
6. For all legal input data the program gives the correct answer.
7. For all legal input and all likely erroneous input the program gives a
correct or reasonable answer.
· Documentation faults.
· Recovery faults.
Executing tests
All-uses coverage: Traverse at least once every definition-free path from every
definition to every P-use or C-use of that definition.
All-C-uses/Some-P-uses coverage: Test so that all definition-free paths from each
definition to all C-uses are tested. If a definition is used only in predicates, test at
least one P-use.
All-P-uses/Some-C-uses coverage: Test so that all definition-free paths from each
definition to all P-uses are tested. If a definition is used only in computations, test at
least one C-use.
All-P-uses coverage: Test so that all definition-free paths from every definition to
every P-use are tested.
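To make the C-use / P-use vocabulary concrete, here is a sketch with a made-up five-line function whose definition-use pairs are enumerated by hand, checking which test paths cover them:

```python
# Sketch (hypothetical function and hand-assigned line numbers):
# definition-use (du) pairs for data flow testing. A du-pair (d, u) is
# covered when some test traverses a definition-free path from d to u.

# def max_abs(x):          # line 1: def of x (parameter)
#     y = -x               # line 2: C-use of x, def of y
#     if x > y:            # line 3: P-use of x, P-use of y
#         y = x            # line 4: C-use of x, def of y (redefinition)
#     return y             # line 5: C-use of y

du_pairs = {
    ("x", 1, 2), ("x", 1, 3), ("x", 1, 4),   # uses of the def at line 1
    ("y", 2, 3), ("y", 2, 5),                # uses of the def at line 2
    ("y", 4, 5),                             # use of the redefinition
}

path_positive = [1, 2, 3, 4, 5]   # e.g. max_abs(3): takes the branch
path_negative = [1, 2, 3, 5]      # e.g. max_abs(-3): skips it

def covered(path):
    pairs = set()
    for var, d, u in du_pairs:
        if d in path and u in path and path.index(d) < path.index(u):
            # Crude definition-free check for this tiny example: the def of
            # y at line 2 does not reach line 5 when line 4 redefines y.
            if not (var == "y" and d == 2 and u == 5 and 4 in path):
                pairs.add((var, d, u))
    return pairs

# All-uses coverage needs both paths: ("y", 2, 5) is covered only on the
# negative path, ("y", 4, 5) only on the positive one.
assert covered(path_positive) | covered(path_negative) == du_pairs
```

This is why a single test through such a function can never achieve all-uses coverage: the two uses of y at line 5 come from defs on mutually exclusive paths.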
Error Based Testing
– input domains
– output domains
· Error based testing tries to identify such domains and generate tests on both sides
of every boundary.
· Test at and near (inside and outside) the boundaries of the program's
applicability.
Software verification and validation activities check the software against its
specifications. Every project must verify and validate the software it produces.
Whatever the size of project, software verification and validation greatly affects
software quality. People are not infallible, and software that has not been verified
has little chance of working. Typically, 20 to 50 errors per 1000 lines of code are
found during development, and 1.5 to 4 per 1000 lines of code remain even after
system testing [Ref 20]. Each of these errors could lead to an operational failure or
non-compliance with a requirement. The objective of software verification and
validation is to reduce software errors to an acceptable level. The effort needed can
range from 30% to 90% of the total project resources, depending upon the
criticality and complexity of the software
The first definition of verification in the list above is the most general and includes
the other two. In ESA PSS-05-0, the first definition applies.
· unit testing;
· integration testing;
· system testing;
· acceptance testing;
· audit.
Three kinds of formal review are normally used for software verification:
· technical review;
· walkthrough;
· audits.
These reviews are all 'formal reviews' in the sense that all have specific objectives
and procedures. They seek to identify defects and discrepancies of the software
against specifications, plans and standards.
Software inspections are a more rigorous alternative to walkthroughs, and are
strongly recommended for software with stringent reliability, security and safety
requirements. Methods for software inspections are described in Section 3.2.
The UR/R, SR/R, AD/R and DD/R are formal reviews held at the end of a phase to
evaluate the products of the phase, and to decide whether the next phase may be
started (UR08, SR09, AD16 and DD11).
Critical design reviews are held in the DD phase to review the detailed design of a
major component to certify its readiness for implementation (DD10).
2.3.1.1 Objectives
· any changes have been properly implemented, and affect only those
systems identified by the change specification (described in a RID,
DCR or SCR).
2.3.1.2 Organisation
· a leader;
· a secretary;
· members.
In large and/or critical projects, the review team may be split into a review board
and a technical panel. The technical panel is usually responsible for processing
RIDs and the technical assessment of review items, producing as output a technical
panel report. The review board oversees the review procedures and then
independently assesses the status of the review items based upon the technical
panel report.
The review team members should have skills to cover all aspects of the review
items. Depending upon the phase, the review team may be drawn from:
· users;
· software engineers;
· software librarians;
Team members examine the review items and attend review meetings. If the
review items are large, complex, or require a range of specialist skills for effective
review, the leader may share the review items among members.
2.3.1.3 Input
· a statement of objectives;
2.3.1.4 Activities
preparation;
review meeting.
The review process may start when the leader considers the review items to be
stable and complete. Obvious signs of instability are the presence of TBDs or of
changes recommended at an earlier review meeting not yet implemented.
Adequate time should be allowed for the review process. This depends on the size
of project. A typical schedule for a large project (20 man years or more) is shown
in Table 2.3.1.4.
Event             Time
Review Meeting    R
Members may have to combine their review activities with other commitments,
and the review schedule should reflect this.
2.3.1.4.1 Preparation
The leader creates the agenda and distributes it, with the statements of objectives,
review items, specifications, plans, standards and guidelines (as appropriate) to the
review team.
Members then examine the review items. Each problem is recorded by completing
boxes one to six of the RID form. A RID should record only one problem, or group
of related problems. Members then pass their RIDs to the secretary, who numbers
each RID uniquely and forwards them to the author for comment. Authors add
their responses in box seven and then return the RIDs to the secretary.
The leader then categorises each RID as major, minor, or editorial. Major RIDs
relate to a problem that would affect capabilities, performance, quality, schedule
and cost. Minor RIDs request clarification on specific points and point out
inconsistencies. Editorial RIDs point out defects in format, spelling and grammar.
Several hundred RIDs can be generated in a large project review, and classification
is essential if the RIDs are to be dealt with efficiently. Failure to categorise the
RIDs can result in long meetings that concentrate on minor problems at the
expense of major ones.
Finally the secretary sorts the RIDs in order of the position of the discrepancy in
the review item. The RIDs are now ready for input to the review meeting.
Preparation for a Software Review Board follows a similar pattern, with RIDs
being replaced by SPRs and SCRs.
Introduction;
Classification of RIDs;
Review of the major RIDs;
Conclusion.
The introduction includes agreeing the agenda, approving the report of any
previous meetings and reviewing the status of outstanding actions.
After the preliminaries, authors present an overview of the review items. If this is
not the first meeting, emphasis should be given to any changes made since the
items were last discussed.
The leader then summarises the classification of the RIDs. Members may request
that RIDs be reclassified (e.g. the severity of a RID may be changed from minor to
major). RIDs that originate during the meeting should be held over for decision at
a later meeting, to allow time for authors to respond.
Major RIDs are then discussed, followed by the minor and editorial RIDs. The
outcome of the discussion of any defects should be noted by the secretary in the
review decision box of the RID form. This may be one of CLOSE, UPDATE,
ACTION or REJECT. The reason for each decision should be recorded. Closure
should be associated with the successful completion of an update. The nature of an
update should be agreed. Actions should be properly formulated, the person
responsible identified, and the completion date specified. Rejection is equivalent to
closing a RID with no action or update.
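The RID life cycle described above (unique numbering, classification by the leader, sorting by position, and a recorded decision) can be modelled as a short Python sketch; the class and field names here are illustrative, not part of the ESA standard or any real tool.

```python
from dataclasses import dataclass

CATEGORIES = ("major", "minor", "editorial")   # assigned by the leader
DECISIONS = ("CLOSE", "UPDATE", "ACTION", "REJECT")

@dataclass
class RID:
    number: int          # unique number assigned by the secretary
    position: int        # location of the discrepancy in the review item
    category: str        # major / minor / editorial
    decision: str = ""   # recorded at the review meeting

    def decide(self, decision, reason):
        assert decision in DECISIONS
        self.decision = decision
        self.reason = reason     # the reason for each decision is recorded

# The secretary sorts RIDs by position in the review item before the meeting
rids = [RID(2, 40, "minor"), RID(1, 10, "major"), RID(3, 25, "editorial")]
rids.sort(key=lambda r: r.position)

# At the meeting, major RIDs are discussed first, then minor, then editorial
agenda = sorted(rids, key=lambda r: CATEGORIES.index(r.category))
agenda[0].decide("UPDATE", "clarify the requirement wording")
```

Classifying before the meeting is what keeps the discussion from spending its time on editorial items at the expense of major ones.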
The conclusions of a review meeting should be agreed during the meeting. Typical
conclusions are:
Output
The output from the review is a technical review report that should contain the
following:
This output can take the form of the minutes of the meeting, or be a self-standing
report. If there are several meetings, the collections of minutes can form the report,
or the minutes can be appended to a report summarising the findings. The report
should be detailed enough for management to judge what happened. If there have
been difficulties in reaching consensus during the review, it is advisable that the
output be signed off by members.
2.3.2 Walkthroughs
2.3.2.2 Organisation
· a leader;
· a secretary;
· members.
The leader, helped by the secretary, is responsible for management tasks associated
with the walkthrough. The specific responsibilities of the leader include:
preparation;
review meeting.
Preparation
2.3.2.5 Output
2.3.3 Audits
2.3.3.1 Objectives
2.3.3.2 Organisation
of:
· a leader;
· members.
2.3.3.3 Input
· software products;
The team formed to carry out the audit should produce a plan
that defines the:
2.3.3.5 Output
· identifies the organisation being audited, the audit team, and the date
and place of the audit;
· defines the scope of the audit, particularly the audit criteria for
products and processes being audited;
· states conclusions;
· makes recommendations;
· lists actions.
2.4 TRACING
Tracing is 'the act of establishing a relationship between two or more products of
the development process; for example, to establish the relationship between a
given requirement and the design element that implements that requirement' [Ref
6]. There are two kinds of traceability:
· forward traceability;
· backward traceability.
· protocols;
· secure systems.
Good protocols and very secure systems depend upon having precise, logical
specifications with no loopholes.
Verification is strictly a paper exercise. It starts with taking all the design inputs:
specifications, government and industry regulations, knowledge taken from
previous designs, and any other information necessary for proper function. With
these requirements in hand, you compare them to your design outputs: drawings,
assembly instructions, test instructions, and electronic design files.
In the comparison you are ensuring that each requirement in the inputs is
accounted for in the outputs. Is each required test called out in the test instructions,
including the correct pass/fail criteria for each test? Are all product acceptance
criteria correct? Are all physical characteristics identified in the build instructions?
Requirement                      C/NC    Verification
Max Dimensions 1″ x 2″ x 4″      C       Drawing 123456-7
The obvious importance in this step is to make sure that the design has not missed
addressing any requirements. If requirements are not compliant, meaning the
design does not meet a requirement, now is the time to know this and negotiate
with the customer if this requirement is necessary or can be relaxed.
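The input-to-output comparison can be sketched as a simple compliance check in Python; the requirement names and output references below are invented for illustration.

```python
# Design inputs: each requirement mapped to the design output that satisfies it
requirements = {
    "Max dimensions 1 x 2 x 4 in": "drawing 123456-7",
    "Burn-in test 24 h":           "test instruction TI-9",
    "Surface finish anodized":     None,  # not yet addressed by any output
}

# Mark each requirement Compliant (C) or Non-Compliant (NC)
statement_of_compliance = {
    req: ("C" if output else "NC") for req, output in requirements.items()
}

# Non-compliant items are negotiated with the customer before validation
non_compliant = [r for r, s in statement_of_compliance.items() if s == "NC"]
```

The value of the exercise is exactly this exhaustive walk: every input requirement either points at an output, or is flagged for negotiation now rather than discovered after build.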
Validation is the step where you actually build a version of the product, and would
be done against the requirements as modified after verification. This does not
necessarily mean the first production unit, but it can. It can also be an engineering
model, which some companies use to prove the first run of a complicated new
design, or it can be a portion of the design which is different from a previous
model, when the design is a modification of an already-proven design. Once you
decide what representative product you will build to prove the design, you fully
test it to make sure that the product, as designed, will meet all the necessary
requirements defined in the Design Inputs.
This will often require more testing than will be used on production models. To
ensure that all requirements are met, a full set of measurement and tests is done on
the validation unit. In some industries this is referred to as a First Article
Inspection (FAI), a first off, or Production Part Approval Process (PPAP).
Depending on customer requirements this can be recorded as a standalone
document, or an addition to the Statement of Compliance created in the verification
step.
After validation of the full set of requirements on one unit, most products can
have a reduced level of inspection and testing, depending on factors such as
requirements criticality or manufacturing process capability. A good product
validation can help decide which requirements need to be checked on every
product, and which do not.
Each of these steps is important in the design process because they serve two
distinct functions. Verification is a theoretical exercise designed to make sure that
no requirements are missed in the design, whereas validation is a practical exercise
that ensures that the product, as built, will function to meet the requirements.
Together, they ensure that the product designed will satisfy the customer needs,
and the needs of the customer are one of the key focuses for ISO 9001 and
improving Customer Satisfaction.
Software Testing - Validation Testing
The process of evaluating software during the development process or at the end of
the development process to determine whether it satisfies specified business
requirements.
Validation testing ensures that the product actually meets the client's needs. It
can also be defined as demonstrating that the product fulfills its intended use
when deployed in an appropriate environment.
Unit Testing
Integration Testing
System Testing
User Acceptance Testing
What is Verification Testing?
Verification is the process of evaluating work-products of a development
phase to determine whether they meet the specified requirements.
Verification ensures that the product is built according to the requirements
and design specifications. It also answers the question: Are we building
the product right?
Verification Testing - Workflow:
Verification testing can best be demonstrated using the V-Model. Artefacts
such as test plans, requirement specifications, designs, code and test cases are
evaluated.
Activities:
Reviews
Walkthroughs
Inspection
SOFTWARE TESTING
UNIT 5 BCA 5TH SEM
TESTING TOOLS
Introduction
• Aims to fix defects or faults at the early stages of the development life cycle
and reduce the cost and time of rework.
A primary purpose for testing is to detect software failures so that defects may be
uncovered and corrected. This is a non-trivial pursuit. Testing cannot establish that
a product functions properly under all conditions but can only establish that it does
not function properly under specific conditions. The scope of software testing often
includes examination of code as well as execution of that code in various
environments and conditions as well as examining the aspects of code: does it do
what it is supposed to do and do what it needs to do. In the current culture of
software development, a testing organization may be separate from the
development team. There are various roles for testing team members. Information
derived from software testing may be used to correct the process by which
software is developed.
Not all software defects are caused by coding errors. One common source of
expensive defects is requirement gaps, e.g., unrecognized requirements that
result in errors of omission by the program designer. A common source of
requirements gaps is non-functional requirements such as testability,
scalability, maintainability, usability, performance, and security.
Compatibility
A frequent cause of software failure is compatibility with another application, a
new operating system, or, increasingly, web browser version. In the case of lack of
backward compatibility, this can occur because the programmers have only
considered coding their programs for, or testing the software upon, "the latest
version of" this-or-that operating system. The unintended consequence of this is
that their latest work might not be fully compatible with earlier mixtures of
software/hardware, or it might not be fully compatible with another important
operating system. In any case, these differences, whatever they might be, may have
resulted in software failures, as witnessed by some significant population of
computer users.
• Assign responsibility to create Test case to personnel who are highly creative and
have expertise in implementing those test cases
• Test for all invalid and unexpected input conditions as well as valid input
conditions
• Identify and include expected results
A good software tester's aim should be to design test cases that uncover the
maximum number of errors with a minimum amount of effort and time.
Software testing methods are traditionally divided into black box testing and
white box testing. These two approaches are used to describe the point of view
that a test engineer takes when designing test cases.
The black box tester has no “bonds” with the code, and a tester’s perception is very
simple: a code must have bugs. Using the principle, “Ask and you shall receive,”
black box testers find bugs where programmers don’t. But, on the other hand,
black box testing has been said to be “like a walk in a dark labyrinth without a
flashlight, “because the tester doesn’t know how the software being tested was
actually constructed. That's why there are situations when (1) a black box tester
writes many test cases to check something that could be tested by only one test
case, and/or (2) some parts of the back end are not tested at all.
Therefore, black box testing has the advantage of “an unaffiliated opinion,” on the
one hand, and the disadvantage of “blind exploring,” on the other.
White box testing, by contrast to black box testing, is when the tester has access to
the internal data structures and algorithms including the code that implement these.
Grey box testing involves having access to internal data structures and algorithms
for purposes of designing the test cases, but testing at the user, or black-box level.
Manipulating input data and formatting output do not qualify as grey box, because
the input and output are clearly outside of the “black-box” that we are calling the
system under test. This distinction is particularly important when conducting
integration testing between two modules of code written by two different
developers, where only the interfaces are exposed for test. However, modifying a
data repository does qualify as grey box, as the user would not normally be able to
change the data outside of the system under test. Grey box testing may also include
reverse engineering to determine, for instance, boundary values or error messages.
Acceptance Testing
Acceptance testing can mean one of two things:
Regression Testing
Regression testing is any type of software testing that seeks to uncover software
regressions. Such regressions occur whenever software functionality that was
previously working correctly stops working as intended. Typically, regressions
occur as an unintended consequence of program changes. Common methods of
regression testing include re-running previously run tests and checking whether
previously fixed faults have re-emerged.
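A minimal sketch of this practice in Python's unittest: keep a test that re-runs the exact case which exposed an earlier, now-fixed fault (the function and bug number here are invented for illustration).

```python
import unittest

def discount(price, percent):
    # Bug #142 (fixed): percent was once applied as a fraction, not a percentage
    return price - price * percent / 100

class RegressionTests(unittest.TestCase):
    def test_bug_142_has_not_reemerged(self):
        # Re-run the exact input that exposed the original fault
        self.assertEqual(discount(200, 10), 180)
```

Keeping such a test in the suite means any later change that re-introduces the fault is caught automatically the next time the suite runs.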
• Performance testing checks to see if the software can handle large quantities of
data or users. This is generally referred to as software scalability. This activity of
Non Functional Software Testing is often referred to as Load Testing.
• Stability testing checks to see if the software can continuously function well
over an acceptable period. This activity of Non Functional Software Testing is
often referred to as endurance testing.
• Usability testing is needed to check if the user interface is easy to use and
understand.
• Security testing is essential for software which processes confidential data and to
prevent system intrusion by hackers.
• Unit testing tests the minimal software component, or module. Each unit (basic
component) of the software is tested to verify that the detailed design for the unit
has been correctly implemented. In an object-oriented environment, this is usually
at the class level, and the minimal unit tests include the constructors and
destructors.
• System testing tests a completely integrated system to verify that it meets its
requirements.
Before shipping the final version of software, alpha and beta testing are often done
additionally:
• Beta testing comes after alpha testing. Versions of the software, known as beta
versions, are released to a limited audience outside of the programming team. The
software is released to groups of people so that further testing can ensure the
product has few faults or bugs. Sometimes, beta versions are made available to the
open public to increase the feedback field to a maximal number of future users.
Finally, acceptance testing can be conducted by the end-user, customer, or client to
validate whether or not to accept the product. Acceptance testing may be
performed as part of the hand- off process between any two phases of
development.
More specific forms of regression testing are known as sanity testing, when
quickly checking for bizarre behavior, and smoke testing when testing for basic
functionality.
There are a number of common software measures, often called "metrics", which
are used to measure the state of the software or the adequacy of the testing.
Testing Artifacts
Test case
Test script
The test script is the combination of a test case, test procedure, and data. Initially
the term was derived from the product of work created by automated regression
test tools. Today, test scripts can be manual, automated, or a combination of both.
Test data
Test suite
The most common term for a collection of test cases is a test suite. The test suite
often also contains more detailed instructions or goals for each collection of test
cases. It definitely contains a section where the tester identifies the system
configuration used during testing. A group of test cases may also contain
prerequisite states or steps, and descriptions of the following tests.
Test plan
A test specification is called a test plan. The developers are well aware of which
test plans will be executed, and this information is made available to them. This
makes the developers more cautious when developing their code, and ensures that
the code is not passed through any surprise test case or test plan.
Test harness
The software, tools, samples of data input and output, and configurations are all
referred to collectively as a test harness.
Traceability matrix
Testing tools
With the trend of automating the task, there are many software tools available,
which may be used to help the testing process. Such tools help to achieve accurate
results in reduced time and effort. The availability and capability of testing tools
often enhance the level of testability of software.
Program testing and fault detection can be aided significantly by testing tools and
debuggers. Testing/debug tools include features such as:
Program monitors, permitting full or partial monitoring of program code including:
Performance analysis, or profiling tools that can help to highlight hot spots and
resource usage
Just as there are software tools available to assist in the basic building of software
code, there are tools that monitor how software is behaving as it runs. These
software analysis tools offer visibility into the execution history of an application.
There are four basic types of software analysis tools:
> Code Coverage—Measures the amount of the software that has been
executed
> Instruction Trace—Creates a record of exactly what happens as the code is
executed
> Memory Analysis—Tracks the code's memory usage and identifies possible
errors
> Performance Analysis—Identifies performance bottlenecks and other issues
allowing fine-tuning of the application for higher performance
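The idea behind code-coverage tools can be sketched in a few lines with Python's `sys.settrace` hook, which records each source line as it executes; real tools such as coverage.py are far more capable, so treat this only as an illustration of the mechanism.

```python
import sys

executed = set()

def tracer(frame, event, arg):
    if event == "line":
        executed.add(frame.f_lineno)  # record every source line that runs
    return tracer

def absolute(n):
    if n < 0:
        return -n
    return n

sys.settrace(tracer)
absolute(5)          # with this single input, the n < 0 branch is never taken
sys.settrace(None)
# 'executed' now holds the line numbers covered by this one test input;
# comparing it against all lines of absolute() reveals the untested branch
```

Adding a second call such as `absolute(-3)` would extend `executed` to cover the negative branch, which is exactly what a coverage report prompts the tester to do.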
The primary difference between software analysis tools and traditional debugging
is that software analysis tools do not require you to stop the application to test it.
Debugging involves starting and stopping the software repeatedly to examine the
code that was executed in order to understand the control flow inside the
application.
The debugging method is problematic in an embedded environment, where
systems cannot always be stopped, or where stopping the system inhibits or skews
the analysis. Consider the powertrain software that controls an automobile's
transmission system. With a real-time system such as this one, it is important to
use tools that can gather information and monitor the control flows as the
application runs, so the developer can be assured that performance and other
operational design specifications are met. This technique helps ensure that the
system will not fail once deployed.
Traditional debugging also comes up short in environments that have multiple
processors within the same system. Increasingly common in today's systems,
multiple threads have multiple tasks running at any given time. It is important to
monitor what is happening in each thread—and how the threads are interacting—
without disturbing the other threads by stopping the application. The ideal software
analysis tool attaches to a running system with as little intrusion as possible.
Static analysis tools are generally used by developers as part of the development
and component testing process. The key aspect is that the code (or other artefact) is
not executed or run but the tool itself is executed, and the source code we are
interested in is the input data to the tool.
Static analysis tools for code can help the developers to understand the structure of
the code, and can also be used to enforce coding standards.
These tools are mostly used by developers. These two types of tool are grouped
together because they are variants of the type of support needed by developers
when testing individual components or units of software. A test harness provides
stubs and drivers, which are small programs that interact with the software under
test (e.g. for testing middleware and embedded software). Some unit test
framework tools provide support for object-oriented software, others for other
development paradigms. Unit test frameworks can be used in agile development to
automate tests in parallel with development. Both types of tool enable the
developer to test, identify and localize any defects. The stubs and drivers supply
any information needed by the software being tested (e.g. an input given by the
user) and also receive any information sent by the software (e.g. a value to be
displayed on a screen). Stubs may also be referred to as ‘mock objects’.
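A hedged sketch of a stub and a driver in Python (the sensor scenario is invented): the stub stands in for hardware that is unavailable at test time and supplies the input the unit needs, while the driver is the small program that exercises the unit under test and checks its output.

```python
# Unit under test: converts a raw sensor reading to a temperature.
def to_celsius(read_raw):
    """read_raw is a callable supplied by the test harness (dependency injection)."""
    raw = read_raw()
    return raw * 0.1 - 40.0

# Stub: stands in for the real sensor and returns a canned reading
def stub_sensor():
    return 650          # pretend the hardware produced this raw value

# Driver: calls the unit under test with the stub and verifies the result
def driver():
    result = to_celsius(stub_sensor)
    assert result == 25.0
    return result
```

Because the unit receives its dependency as a parameter, the same code runs unchanged against the real sensor in production and against the stub in the harness.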
There are many 'xUnit' tools for different programming languages, e.g. JUnit for
Java, NUnit for .Net applications, etc. There are both commercial tools and also
open-source (i.e. free) tools. Unit test framework tools are very similar to test
execution tools, since they provide facilities such as the ability to store test cases
and monitor whether tests pass or fail, for example.
The main difference is that there is no capture/playback facility and they tend to be
used at a lower level, i.e. for component or component integration testing, rather
than for system or acceptance testing.
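The store-test-cases and pass/fail-monitoring facilities can be illustrated with Python's own xUnit-style framework, unittest; the test class below is invented for illustration.

```python
import unittest

class StackTests(unittest.TestCase):
    def setUp(self):              # fixture: runs before every individual test
        self.items = []

    def test_push(self):
        self.items.append(1)
        self.assertEqual(self.items, [1])

    def test_pop_empty_fails(self):
        with self.assertRaises(IndexError):
            self.items.pop()      # popping an empty list must raise

# The framework collects the stored test cases and monitors pass/fail
suite = unittest.TestLoader().loadTestsFromTestCase(StackTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
# 'result' records how many tests ran and whether any failed
```

The `setUp` fixture is the framework's way of giving each test a fresh starting state, so tests never depend on one another's leftovers.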
A test comparator helps to automate the comparison between the actual and the
expected result produced by the software.
There are two ways in which actual results of a test can be compared to the
expected results for the test:
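A minimal sketch of a post-execution test comparator in Python (the function is illustrative): it reports the first line where the actual output diverges from the expected output.

```python
def compare(actual_lines, expected_lines):
    """Post-execution comparison: report the first mismatching line, if any."""
    for i, (a, e) in enumerate(zip(actual_lines, expected_lines), start=1):
        if a != e:
            return f"mismatch at line {i}: got {a!r}, expected {e!r}"
    if len(actual_lines) != len(expected_lines):
        return "mismatch: outputs have different lengths"
    return "PASS"
```

A dynamic comparator would do the same check during execution, inside the running test, rather than over captured output files afterwards.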
Debuggers
Introduction
Debugging means locating (and then removing) bugs, i.e., faults, in programs. In
the entire process of program development errors may occur at various stages and
efforts to detect and remove them may also be made at various stages. However,
the word debugging is usually in context of errors that manifest while running the
program during testing or during actual use. The most common steps taken in
debugging are to examine the flow of control during execution of the program,
examine values of variables at different points in the program, examine the values
of parameters passed to functions and values returned by the functions, examine
the function call sequence, etc. In the absence of other mechanisms, one usually
inserts statements in the program at various carefully chosen points that print
values of significant variables or parameters, or some message that indicates the
flow of control (or function call sequence). When such a modified version of the
program is run, the information output by the extra statements gives clues to the
errors.
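A hedged Python sketch of this debugging-by-printing technique (the function and flag are invented): the inserted trace calls report values of significant variables as control flows through the program, and can be switched off once the fault is located.

```python
DEBUG = True  # flip to False once the fault has been located

def trace(msg):
    if DEBUG:
        print(f"[debug] {msg}")

def average(values):
    trace(f"entering average, values={values}")
    total = 0
    for v in values:
        total += v
        trace(f"after adding {v}, total={total}")   # watch the accumulator
    result = total / len(values)
    trace(f"returning {result}")
    return result
```

Running `average([2, 4, 6])` prints the value of `total` after each addition, making it easy to spot, say, an accumulation error on a particular input.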
Most debuggers allow the user to refer to the program information in terms of
symbols of source program, viz., variable names, subroutine names, parameter
names, field names of composite data structures (records), source program line
numbers (for specifying breakpoints), etc. Since an executable program usually
does not contain the mappings from source program symbols to target program
addresses, to be useful to a debugger the compiler must include such mappings
in the executable program as additional information (say, debugger
information). Most compilers support some invocation option for this purpose
(e.g. in Unix/Linux, the cc option -g). The format of this information created by
a compiler must be understandable by a debugger.
Principle of Operation
The principle of operation of a debugger can be understood by considering a
simple view - from the specially compiled executable program the debugger reads
the debugger information into its own data structures. The interactive features
of the debugger are in the form of a module (that can be invoked as a function
call or through a software interrupt). The interactive interface is invoked by the
debugger once at the beginning. The user can specify breakpoints through the
interface and then tell the debugger to start execution of the program to be
debugged. The program will continue till the first breakpoint is encountered. At
that point control is transferred to the interactive interface. The programmer can
carry out various kinds of operations that are supported in that interface.
Setting of breakpoints
To make the user program stop at specified points, the debugger inserts certain
statements at those points that transfer control to the interactive interface
module. These statements might be in the form of a function call instruction or a
software interrupt instruction. Usually, in programs compiled specifically for
debugging, the compiler inserts NOP instructions after the translation of each
statement of the source program, so the debugger can simply replace a NOP
instruction with the function call or interrupt instruction. In cases where NOP
instructions are not present, the debugger replaces valid instructions of the
program to insert its own function call (or interrupt) instruction, but takes care
to have the original instructions executed whenever execution proceeds through
that point.
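The NOP-replacement mechanism can be illustrated with a toy interpreter in Python; the instruction set and the debugger hook are invented for illustration, but the idea follows the text: the compiler leaves a NOP slot after each statement, and the debugger swaps the NOP for a trap into its interactive module.

```python
# Toy "compiled program": each statement is followed by a NOP slot,
# mirroring a compiler that emits NOPs for the debugger's benefit.
NOP = ("NOP", None)

program = [
    ("SET", ("x", 1)), NOP,
    ("ADD", ("x", 5)), NOP,
    ("SET", ("y", 0)), NOP,
]

def set_breakpoint(program, index, handler):
    """Replace the NOP after statement `index` with a trap into the debugger."""
    assert program[2 * index + 1] == NOP
    program[2 * index + 1] = ("TRAP", handler)

def run(program):
    env, hits = {}, []
    for op, arg in program:
        if op == "SET":
            env[arg[0]] = arg[1]
        elif op == "ADD":
            env[arg[0]] += arg[1]
        elif op == "TRAP":
            arg(dict(env), hits)   # control transfers to the interactive module
    return env, hits

def debugger_interface(snapshot, hits):
    hits.append(snapshot)          # a real debugger would prompt the user here

set_breakpoint(program, 1, debugger_interface)   # break after the ADD statement
env, hits = run(program)
```

Because only a reserved NOP slot is overwritten, every original instruction still executes, which is exactly why compilers emit those slots in debug builds.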
What constitutes an “acceptable defect rate” depends on the nature of the software.
For example, an arcade video game designed to simulate flying an airplane would
presumably have a much higher tolerance for defects than mission critical software
such as that used to control the functions of an airliner that really is flying!
Although there are close links with SQA, testing departments often exist
independently, and there may be no SQA function in some companies. Software
testing is a task intended to detect defects in software by contrasting a computer
program's expected results with its actual results for a given set of inputs. By
contrast, QA (Quality Assurance) is the implementation of policies and procedures
intended to prevent defects from occurring in the first place. Software Quality
Control (SQC) is a set of activities for ensuring quality in software products.
Reviews
Requirement Review
Design Review
Code Review
Testing
Unit Testing
Integration Testing
System Testing
Acceptance Testing
Integrity (Is it secure?)
Accuracy
Instrumentation
Communication commonality
Communicativeness
Self-documentation
Conciseness
Simplicity
Auditing
Training
Project Management
Configuration Management
Requirements Development/Management
Estimation
Software Design
Testing
Once the processes have been defined and implemented, Quality Assurance has the
following responsibilities:
SQA includes
Often this group has the final veto over the release of a software product.
SQA Activities
3. Software Testing
4. Enforcement of standards
Documentation standards
Testing standards
5. Control of Change
The approving authority decides which changes get implemented and when.
Programmers are not permitted to make unapproved changes to the software.
This gives an opportunity to evaluate the impact and cost of changes, and their
effect on software quality, before committing resources.
6. Measurement
Below are the quality assurance criteria against which the software would be
evaluated:
Correctness, efficiency, flexibility, integrity, interoperability, maintainability,
portability, reliability, reusability, stability, usability
Plan - The stage where the quality control processes are planned. The goal is to
avoid making errors through proper planning and execution, with a correct review
process.
Software Quality Management ensures that the required level of quality is achieved
by submitting improvements to the product development process. SQA aims to
develop a culture within the team and it is seen as everyone's responsibility.
Quality Control - Ensure that best practices and standards are followed by the
software development team to produce quality products.
Differences between Software Quality Assurance (SQA) and Software Quality
Control (SQC):
Training
The quality management system under which the software system is created is
normally based on one or more of the following models/standards:
ISO 9000
CMMI
The effort required to obtain ISO 9000 registration varies directly with how closely
an organization’s process fits the ISO 9000 model.
ISO registration can cost a lot of time, effort and money to achieve. It requires
continuing effort to stay registered.
ISO 9126 provides a user view of software quality. It doesn't address software
process issues.
ISO 9000
The emphasis in the ISO standard is on documentation of the process and the
managing of the process.
9. Process control.
ISO 9000 certification focuses on how well the processes are documented,
not on the quality of the process. Many companies do the minimum required to
achieve ISO 9000 certification for business reasons, but forget about it as soon
as the ISO 9000 inspectors leave.
ISO 9000 forces companies to act in ways which make things worse for their
customers. ISO 9000 is based on the faulty premise that work is best controlled by
specifying and controlling procedures.
The Capability Maturity Model for Software (CMM) is a five-level model laying
out a staged path of increasing software process maturity.
1. Initial – ad hoc
At the highest (Optimizing) level, continuous process improvement is enabled by
quantitative feedback from the process and from piloting innovative ideas and
technologies.
CMM Levels and Key Process Areas
1. Initial level
2. Repeatable level
Requirements management
3. Defined level
Training Program
Integrated software management
Intergroup coordination
Peer reviews
4. Managed level
5. Optimizing level
Defect prevention
1. Initial
2. Repeatable
3. Defined.
Workforce planning
4. Managed
Organizational-competency management
Team-based practices
Team building
Mentoring
5. Optimizing
Coaching