LECTURE NOTES ON
SOFTWARE TESTING
(15A05505)
III B.TECH I SEMESTER
(JNTUA-R15)
CONTENTS
1 Unit-I: Purpose of Testing
  1.1 Introduction: Purpose of Testing
  1.2 Unit-I notes
  1.3 Unit-I 2-Marks Questions with Answers
  1.4 Part A Questions
  1.5 Part B Questions
2 Unit-II: Transaction Flow Testing
  2.1 Introduction: Transaction Flows
  2.2 Unit-II notes
  2.3 Unit-II 2-Marks Questions with Answers
  2.4 Part A Questions
  2.5 Part B Questions
3 Unit-III: Domain Testing
  3.1 Introduction: Domains and Paths
  3.2 Unit-III notes
  3.3 Unit-III 2-Marks Questions with Answers
  3.4 Part A Questions
  3.5 Part B Questions
4 Unit-IV: Paths, Path Products and Regular Expressions
  4.1 Introduction: Path Products & Path Expressions
  4.2 Unit-IV notes
  4.3 Unit-IV 2-Marks Questions with Answers
  4.4 Part A Questions
  4.5 Part B Questions
5 Unit-V: State, State Graphs and Transition Testing
  5.1 Introduction: State Graphs
  5.2 Unit-V notes
  5.3 Unit-V 2-Marks Questions with Answers
  5.4 Part A Questions
  5.5 Part B Questions
SOFTWARE TESTING
UNIT I
Introduction: Purpose of Testing, Dichotomies, Model for Testing, Consequences of Bugs, Taxonomy of Bugs.
Flow graphs and Path testing: Basic Concepts of Path Testing, Predicates, Path Predicates and Achievable Paths, Path
Sensitizing, Path Instrumentation, Application of Path Testing.
PURPOSE OF TESTING
Testing consumes at least half of the time and work required to produce a functional program.
MYTH: Good programmers write code without bugs. (It's wrong!)
History shows that even well-written programs still have 1-3 bugs per hundred statements.
Productivity and Quality in software:
❖ In production of consumer goods and other products, every manufacturing stage is subjected
to quality control and testing from component to final stage.
❖If flaws are discovered at any stage, the product is either discarded or cycled back for rework and correction.
❖Productivity is measured by the sum of the costs of the material, the rework, and the discarded components,
and the cost of quality assurance and testing.
❖ There is a tradeoff between quality assurance costs and manufacturing costs: If sufficient time
is not spent in quality assurance, the reject rate will be high and so will be the net cost. If
inspection is good and all errors are caught as they occur, inspection costs will dominate, and
again the net cost will suffer.
❖ Testing and quality assurance costs for 'manufactured' items can be as low as 2% in consumer
products or as high as 80% in products such as spaceships, nuclear reactors, and aircraft,
where failures threaten life. In contrast, the manufacturing cost of software (making a copy) is trivial.
❖ The biggest part of software cost is the cost of bugs: the cost of detecting them, the cost of
correcting them, the cost of designing tests that discover them, and the cost of running those
tests.
❖ For software, quality and productivity are indistinguishable because the cost of a software copy
is trivial.
Testing and test design are parts of quality assurance, which should also focus on bug prevention. A prevented
bug is better than a detected and corrected bug.
Phases in a tester's mental life can be categorized into the following 5 phases:
1. Phase 0: (Until 1956: Debugging Oriented) There is no difference between testing and debugging. Phase 0
thinking was the norm in early days of software development till testing emerged as a discipline.
2. Phase 1: (1957-1978: Demonstration Oriented) The purpose of testing here is to show that software works.
Highlighted during the late 1970s. This failed because the probability of showing that software works
'decreases' as testing increases; i.e., the more you test, the more likely you'll find a bug.
3. Phase 2: (1979-1982: Destruction Oriented) The purpose of testing is to show that software doesn't work. This
also failed because the software would never get released, as you will always find one bug or another. Also,
correcting a bug may introduce another bug.
4. Phase 3: (1983-1987: Evaluation Oriented) The purpose of testing is not to prove anything but to reduce the
perceived risk of not working to an acceptable value (Statistical Quality Control). Notion is that testing does
improve the product to the extent that testing catches bugs and to the extent that those bugs are fixed. The
product is released when the confidence on that product is high enough. (Note: This is applied to large
software products with millions of code and years of use.)
5. Phase 4: (1988-2000: Prevention Oriented) Testability is the factor considered here. One reason is to reduce
the labour of testing. Other reason is to check the testable and non-testable code. Testable code has fewer
bugs than the code that's hard to test. Identifying the testing techniques to test the code is the main key here.
Test Design: We know that software code must be designed and tested, but many appear to be unaware that
tests themselves must be designed and tested. Tests should be properly designed and tested before being
applied to the actual code.
Testing isn't everything: There are approaches other than testing to create better software. Methods other than
testing include:
1. Inspection Methods: Methods like walkthroughs, desk checking, formal inspections and code reading appear to
be as effective as testing, but the bugs caught do not completely overlap.
2. Design Style: While designing the software itself, adopting stylistic objectives such as testability, openness and
clarity can do much to prevent bugs.
3. Static Analysis Methods: Includes formal analysis of source code during compilation. In earlier days, this was a
routine job of the programmer; now compilers have taken over that job.
4. Languages: The source language can help reduce certain kinds of bugs. Programmers find new bugs while using
new languages.
5. Development Methodologies and Development Environment: The development process and the environment
in which that methodology is embedded can prevent many kinds of bugs.
DICHOTOMIES
Testing Versus Debugging: Many people consider the two to be the same. The purpose of testing is to show that a
program has bugs. The purpose of debugging is to find the error or misconception that led to the program's failure
and to design and implement the program changes that correct the error.
Debugging usually follows testing, but they differ as to goals, methods and, most important, psychology. The
table below shows a few important differences between testing and debugging.
1. Testing starts with known conditions, uses predefined procedures and has predictable outcomes. Debugging starts from possibly unknown initial conditions, and the end cannot be predicted except statistically.
2. Testing can and should be planned, designed and scheduled. The procedure and duration of debugging cannot be so constrained.
3. Testing is a demonstration of error or apparent correctness. Debugging is a deductive process.
4. Testing proves a programmer's failure. Debugging is the programmer's vindication (justification).
5. Testing, as executed, should strive to be predictable, dull, constrained, rigid and inhuman. Debugging demands intuitive leaps, experimentation and freedom.
6. Much testing can be done without design knowledge. Debugging is impossible without detailed design knowledge.
7. Testing can often be done by an outsider. Debugging must be done by an insider.
8. Much of test execution and design can be automated. Automated debugging is still a dream.
Function versus Structure: Tests can be designed from a functional or a structural point of view. In functional
testing, the program or system is treated as a black box. It is subjected to inputs, and its outputs are verified for
conformance to specified behaviour. Functional testing takes the user's point of view: it is concerned with
functionality and features, not the program's implementation. Structural testing does look at the implementation details. Things
such as programming style, control method, source language, database design, and coding details dominate
structural testing.
Both Structural and functional tests are useful, both have limitations, and both target different kinds of bugs.
Functional tests can detect all bugs but would take infinite time to do so. Structural tests are inherently finite but
cannot detect all errors even if completely executed.
Designer Versus Tester: The test designer is the person who designs the tests, whereas the tester is the one who
actually tests the code. During functional testing, the designer and tester are probably different persons. During unit
testing, the tester and the programmer merge into one person.
Tests designed and executed by the software designers are by nature biased towards structural consideration and
therefore suffer the limitations of structural testing.
Modularity Versus Efficiency: A module is a discrete, well-defined, small component of a system. The smaller the
modules, the more difficult integration becomes; the larger the modules, the harder they are to understand. Both tests and systems can be modular.
Testing can and should likewise be organised into modular components. Small, independent test cases can be
designed to test independent modules.
Small versus Large: Programming in the large means constructing programs that consist of many components written
by many different programmers. Programming in the small is what we do for ourselves in the privacy of our own
offices. Qualitative and Quantitative changes occur with size and so must testing methods and quality criteria.
Builder Versus Buyer: Most software is written and used by the same organization. Unfortunately, this situation is
dishonest because it clouds accountability. If there is no separation between builder and buyer, there can be no
accountability.
The different roles / users in a system include:
1. Builder: Who designs the system and is accountable to the buyer.
2. Buyer: Who pays for the system in the hope of profits from providing services.
3. User: Ultimate beneficiary or victim of the system. The user's interests must also be guarded.
4. Tester: Who is dedicated to the builder's destruction.
5. Operator: Who has to live with the builders' mistakes, the buyers' murky (unclear) specifications, testers'
oversights and the users' complaints.
MODEL FOR TESTING:
A model of the testing process includes three models: a model of the environment, a model of the
program and a model of the expected bugs.
ENVIRONMENT:
❖ A Program's environment is the hardware and software required to make it run. For online systems, the
environment may include communication lines, other systems, terminals and operators.
❖ The environment also includes all programs that interact with and are used to create the program under test -
such as OS, linkage editor, loader, compiler, utility routines.
❖ Because the hardware and firmware are stable, it is not smart to blame the environment for bugs.
PROGRAM:
❖ Most programs are too complicated to understand in detail.
❖ The concept of the program must be simplified in order to test it.
❖ If the simple model of the program does not explain the unexpected behaviour, we may have to modify that model
to include more facts and details. And if that fails, we may have to modify the program.
BUGS:
❖ Bugs are more insidious (deceiving but harmful) than ever we expect them to be.
❖ An unexpected test result may lead us to change our notion of what a bug is and our model of
bugs.
❖ Programmers and testers who hold optimistic notions about bugs are usually unable to test
effectively and unable to justify the dirty tests most programs need.
❖ OPTIMISTIC NOTIONS ABOUT BUGS:
1. Benign Bug Hypothesis: The belief that bugs are nice, tame and logical. (Benign: Not
Dangerous)
2. Bug Locality Hypothesis: The belief that a bug discovered within a component affects only that
component's behaviour.
3. Control Bug Dominance: The belief that errors in the control structures (if, switch etc) of programs
dominate the bugs.
4. Code / Data Separation: The belief that bugs respect the separation of code and data.
5. Lingua Salvator Est: The belief that the language syntax and semantics (e.g. Structured Coding, Strong
typing, etc) eliminates most bugs.
6. Corrections Abide: The mistaken belief that a corrected bug remains corrected.
7. Silver Bullets: The mistaken belief that X (Language, Design method, representation, environment) grants
immunity from bugs.
8. Sadism Suffices: The common belief (especially by independent tester) that a sadistic streak, low cunning,
and intuition are sufficient to eliminate most bugs. Tough bugs need methodology and techniques.
9. Angelic Testers: The belief that testers are better at test design than programmers are at
code design.
TESTS:
❖ Tests are formal procedures: inputs must be prepared, outcomes should be predicted, tests should be
documented, commands need to be executed, and results are to be observed. All of these steps are subject
to error.
❖ We do three distinct kinds of testing on a typical software system. They are:
1. Unit / Component Testing: A Unit is the smallest testable piece of software that can be compiled,
assembled, linked, loaded etc. A unit is usually the work of one programmer and consists of several
hundred or fewer lines of code. Unit Testing is the testing we do to show that the unit does not satisfy its
functional specification or that its implementation structure does not match the intended design
structure. A Component is an integrated aggregate of one or more units. Component Testing is the testing
we do to show that the component does not satisfy its functional specification or that its implementation
structure does not match the intended design structure.
2. Integration Testing: Integration is the process by which components are aggregated to create larger
components. Integration Testing is testing done to show that, even though the components were
individually satisfactory (having passed component testing), the combination of components is
incorrect or inconsistent.
3. System Testing: A System is a big component. System Testing is aimed at revealing bugs that cannot be
attributed to components. It includes testing for performance, security, accountability, configuration
sensitivity, startup and recovery.
Role of Models: The art of testing consists of creating, selecting, exploring, and revising models. Our ability to go
through this process depends on the number of different models we have at hand and their ability to express a
program's behaviour.
CONSEQUENCES OF BUGS
IMPORTANCE OF BUGS: The importance of bugs depends on frequency, correction cost, installation cost, and
consequences.
1. Frequency: How often does that kind of bug occur? Pay more attention to the more frequent bug types.
2. Correction Cost: What does it cost to correct the bug after it is found? The cost is the sum of two factors: (1) the
cost of discovery and (2) the cost of correction. These costs rise dramatically the later in the development cycle
the bug is discovered. Correction cost also depends on system size.
3. Installation Cost: Installation cost depends on the number of installations: small for a single user program but
more for distributed systems. Fixing one bug and distributing the fix could exceed the entire system's
development cost.
4. Consequences: What are the consequences of the bug? Bug consequences can range from mild to catastrophic.
A reasonable metric for bug importance is:
Importance ($) = Frequency * (Correction cost + Installation cost + Consequential cost)
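As a quick worked illustration (the numbers below are invented for illustration, not taken from these notes), the metric can be computed directly:

    # Worked example of the importance metric; all figures are assumptions.
    frequency = 0.15             # this bug type is 15% of reported bugs
    correction_cost = 400.0      # dollars: discovery + correction
    installation_cost = 50.0     # dollars: distributing the fix
    consequential_cost = 1000.0  # dollars: damage done before the fix
    importance = frequency * (correction_cost + installation_cost + consequential_cost)
    print(f"importance = ${importance:.2f}")   # 0.15 * 1450 = $217.50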
CONSEQUENCES OF BUGS: The consequences of a bug can be measured in human terms rather than machine terms.
Some consequences of a bug on a scale of one to ten are:
1. Mild: The symptoms of the bug offend us aesthetically; e.g., a misspelled output or a misaligned printout.
2. Moderate: Outputs are misleading or redundant. The bug impacts the system's performance.
3. Annoying: The system's behavior because of the bug is dehumanizing. E.g., names are truncated or arbitrarily
modified.
4. Disturbing: It refuses to handle legitimate (authorized / legal) transactions. The ATM won't give you money. My
credit card is declared invalid.
5. Serious: It loses track of its transactions. Not just the transaction itself but the fact that the transaction
occurred. Accountability is lost.
6. Very Serious: The bug causes the system to do the wrong transactions. Instead of losing your paycheck, the
system credits it to another account or converts deposits to withdrawals.
7. Extreme: The problems aren't limited to a few users or to a few transaction types. They are frequent and
arbitrary instead of sporadic (infrequent) or confined to unusual cases.
8. Intolerable: Long term unrecoverable corruption of the database occurs and the corruption is not easily
discovered. Serious consideration is given to shutting the system down.
9. Catastrophic: The decision to shut down is taken out of our hands because the system fails.
10. Infectious: What can be worse than a failed system? One that corrupts other systems even though it does not
fail itself; that erodes the social or physical environment; that melts nuclear reactors and starts wars.
FLEXIBLE SEVERITY RATHER THAN ABSOLUTES:
o Quality can be measured as a combination of factors, of which number of bugs and their severity is only one
component.
o Many organizations have designed and used satisfactory, quantitative, quality metrics.
o Because bugs and their symptoms play a significant role in such metrics, as testing progresses, you see the
quality rise to a reasonable value at which the product is deemed safe to ship.
o The factors involved in bug severity are:
1. Correction Cost: Not so important, because a catastrophic bug may be easy to correct while a small bug may
take major time to debug.
2. Context and Application Dependency: Severity depends on context and the application in which it is used.
3. Creating Culture Dependency: What's important depends on the creators of the software and their cultural
aspirations. Test tool vendors are more sensitive about bugs in their software than games software
vendors.
4. User Culture Dependency: Severity also depends on user culture. Naive users of PC software go crazy over
bugs, whereas pros (experts) may just ignore them.
5. The Software Development Phase: Severity depends on the development phase. A bug gets more severe the
closer it gets to field use, and the longer it has been around.
TAXONOMY OF BUGS:
There is no universally correct way to categorize bugs, and the taxonomy is not rigid.
A given bug can be put into one or another category depending on its history and the programmer's state of mind.
The major categories are: (1) Requirements, Features and Functionality Bugs (2) Structural Bugs (3) Data Bugs (4)
Coding Bugs (5) Interface, Integration and System Bugs (6) Test and Test Design Bugs.
Data bugs:
Data bugs include all bugs that arise from the specification of data objects, their formats, the number of such
objects, and their initial values.
Data Bugs are at least as common as bugs in code, but they are often treated as if they did not exist at all.
Code migrates data: Software is evolving towards programs in which more and more of the control and
processing functions are stored in tables.
Because of this, there is an increasing awareness that bugs in code are only half the battle and the data problems
should be given equal attention.
Dynamic Data Vs Static data:
Dynamic data are transitory. Whatever their purpose, their lifetime is relatively short, typically the processing
time of one transaction. A storage object may be used to hold dynamic data of different types, with different
formats, attributes and residues.
Dynamic data bugs are due to leftover garbage in a shared resource. This can be handled in one of three
ways: (1) clean-up after use by the user, (2) common clean-up by the resource manager, or (3) no clean-up.
Static data are fixed in form and content. They appear in the source code or database, directly or indirectly;
for example, a number, a string of characters, or a bit pattern.
Compile-time processing will solve the bugs caused by static data.
Information, parameter, and control: Static or dynamic data can serve in one of three roles, or in combination of
roles: as a parameter, for control, or for information.
Content, Structure and Attributes: Content can be an actual bit pattern, character string, or number put into
a data structure. Content is a pure bit pattern and has no meaning unless it is interpreted by a hardware or
software processor. All data bugs result in the corruption or misinterpretation of content. Structure relates
to the size, shape and numbers that describe the data object, that is, the memory locations used to store the
content (e.g., a two-dimensional array). Attributes relate to the specification meaning, that is, the semantics
associated with the contents of a data object (e.g., an integer, an alphanumeric string, a subroutine). The
severity and subtlety of bugs increase as we go from content to attributes because things get less formal
in that direction.
Coding bugs:
Coding errors of all kinds can create any of the other kind of bugs.
Syntax errors are generally not important in the scheme of things if the source language translator has adequate
syntax checking.
If a program has many syntax errors, then we should expect many logic and coding bugs.
Documentation bugs are also considered coding bugs because misleading documentation can mislead maintenance programmers.
Software Architecture:
Software architecture bugs are the kind called interactive.
Routines can pass unit and integration testing without revealing such bugs.
Many of them depend on load, and their symptoms emerge only when the system is stressed.
Samples of such bugs: assumption that there will be no interrupts, failure to block or unblock interrupts,
assumption that memory and registers were initialized or not initialized, etc.
Careful integration of modules and subjecting the final system to a stress test are effective methods for
catching these bugs.
Control and Sequence Bugs (Systems Level):
These bugs include: Ignored timing, Assuming that events occur in a specified sequence, Working on data before
all the data have arrived from disc, Waiting for an impossible combination of prerequisites, Missing, wrong,
redundant or superfluous process steps.
The remedy for these bugs is highly structured sequence control.
Specialized internal sequence-control mechanisms are helpful.
Resource Management Problems:
Memory is subdivided into dynamically allocated resources such as buffer blocks, queue blocks, task control
blocks, and overlay buffers.
External mass storage units such as discs, are subdivided into memory resource pools.
Some resource management and usage bugs: required resource not obtained, wrong resource used, resource
already in use, resource deadlock, etc.
Resource Management Remedies: A design remedy that prevents bugs is always preferable to a test method
that discovers them.
The design remedy in resource management is to keep the resource structure simple: the fewest different kinds
of resources, the fewest pools, and no private resource management.
Integration Bugs:
Integration bugs are bugs having to do with the integration of, and with the interfaces between, working and
tested components.
These bugs result from inconsistencies or incompatibilities between components.
The communication methods involved, such as data structures, call sequences, registers, semaphores,
communication links and protocols, are where integration bugs arise.
Although integration bugs do not constitute a big category (about 9%), they are an expensive category because they
are usually caught late in the game and because they force changes in several components and/or data structures.
System Bugs:
System bugs cover all kinds of bugs that cannot be ascribed to a component or to their simple interactions,
but result from the totality of interactions between many components such as programs, data, hardware, and
the operating system.
There can be no meaningful system testing until there has been thorough component and integration testing.
System bugs are infrequent (about 1.7%) but very important because they are often found only after the system has
been fielded.
Test and test design bugs:
Testing: testers have no immunity to bugs. Tests require complicated scenarios and databases.
They require code or the equivalent to execute and consequently they can have bugs.
Test criteria: The specification may be correct, correctly interpreted and implemented, and a proper test may have
been designed; but the criterion by which the software's behavior is judged may be incorrect or impossible. So
proper test criteria have to be designed. The more complicated the criteria, the likelier they are to have bugs.
FLOW GRAPHS AND PATH TESTING
Basic Concepts of Path Testing, Predicates, Path Predicates and Achievable Paths, Path Sensitizing, Path
Instrumentation, Application of Path Testing.
o The first step in translating the program to a flowchart is shown, where we have the typical one-for-one
classical flowchart. Note that complexity has increased, clarity has decreased, and that we had to add auxiliary
labels (LOOP, XX, and YY), which have no actual program counterpart. We merged the process steps and
replaced them with the single process box. We now have a control flow graph. But this representation is still
too busy. We simplify the notation further where for the first time we can really see what the control flow
looks like.
The final transformation is shown in the figure, where we've dropped the node numbers to achieve an even simpler
representation. The way to work with control flow graphs is to use the simplest possible representation; that is, no
more information than you need to correlate back to the source program or PDL.
A flow graph is a pictorial representation of a program and not the program itself, just as a topographic map is
not the terrain it represents.
You can't always associate the parts of a program in a unique way with flow graph parts, because many program
structures, such as if-then-else constructs, consist of a combination of decisions, junctions, and processes.
The translation from a flow graph element to a statement and vice versa is not always unique.
Figure: Alternative Flow graphs for same logic (Statement "IF (A=0) AND (B=1) THEN . . .").
An improper translation from flow graph to code during coding can lead to bugs, and an improper translation
during test design leads to missing test cases and undiscovered bugs.
For X negative, the output is X + A, while for X greater than or equal to zero, the output is X + 2A. Following
prescription 2 (execute every statement) while not forcing every branch would not reveal the bug in the following
incorrect version:
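(The original figure with the correct and incorrect versions is not reproduced in these notes. The following Python sketch is an illustrative reconstruction under the stated behavior: output X + A for negative X, X + 2A otherwise.)

    def f_correct(x, a):
        # Specified behavior: x + a for negative x, x + 2a otherwise.
        if x >= 0:
            x = x + a
        return x + a

    def f_buggy(x, a):
        # Incorrect version: the increment is hoisted above the test,
        # so the decision examines the already-modified x.
        x = x + a
        if x >= 0:
            x = x + a
        return x

    # Two tests that execute every statement of f_buggy...
    assert f_buggy(-5, 1) == f_correct(-5, 1)   # negative value: correct answer
    assert f_buggy(3, 1) == f_correct(3, 1)     # remaining statements covered
    # ...yet the bug survives; it shows only near the boundary:
    print(f_correct(-1, 5), f_buggy(-1, 5))     # 4 9 -> bug revealed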
A negative value produces the correct answer. Every statement can be executed, but if the test cases do not force each
branch to be taken, the bug can remain hidden. The next example uses a test based on executing each branch
(prescription 3) but does not force the execution of all statements:
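(This figure is also missing from the notes. Python has no GOTO, so the sketch below models the label-100 example as self-contained dead code that no branch ever reaches; the structure is assumed, not taken from the source.)

    def compute(x, a):
        if x < 0:
            return x + a        # branch 1
        return x + 2 * a        # branch 2
        # Analog of "100: X = X + A / GOTO 100": syntactically valid,
        # self-referential code that no decision in the routine leads to.
        while True:             # hidden loop: never executed
            x = x + a

    # Branch-oriented tests cover both decision outcomes...
    assert compute(-1, 5) == 4
    assert compute(1, 5) == 11
    # ...but nothing forces the statements after the returns, so the
    # hidden loop is never exercised.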
The hidden loop around label 100 is not revealed by tests based on prescription 3 alone because no test forces
the execution of statement 100 and the following GOTO statement. Furthermore, label 100 is not flagged by
the compiler as an unreferenced label and the subsequent GOTO does not refer to an undefined label.
A Static Analysis (that is, an analysis based on examining the source code or structure) cannot determine
whether a piece of code is or is not reachable. There could be subroutine calls with parameters that are
subroutine labels, or in the above example there could be a GOTO that targeted label 100 but could never
achieve a value that would send the program to that label.
Only a Dynamic Analysis (that is, an analysis based on the code's behavior while running - which is to say, to all
intents and purposes, testing) can determine whether code is reachable or not and therefore distinguish
between the ideal structure we think we have and the actual, buggy structure.
After you have traced a covering path set on the master sheet and filled in the table for every path, check the
following:
1. Does every decision have a YES and a NO in its column? (C2)
2. Has every case of all case statements been marked? (C2)
3. Is every three - way branch (less, equal, greater) covered? (C2)
4. Is every link (process) covered at least once? (C1)
Loops:
Cases for a single loop: A Single loop can be covered with two cases: Looping and Not looping. But, experience
shows that many loop-related bugs are not discovered by C1+C2. Bugs hide themselves in corners and
congregate at boundaries - in the cases of loops, at or around the minimum or maximum number of times the
loop can be iterated. The minimum number of iterations is often zero, but it need not be.
CASE 1: Single loop, Zero minimum, N maximum, No excluded values
1. Try bypassing the loop (zero iterations). If you can't, you either have a bug, or zero is not the minimum
and you have the wrong case.
2. Could the loop-control variable be negative? Could it appear to specify a negative number of
iterations? What happens to such a value?
3. One pass through the loop.
4. Two passes through the loop.
5. A typical number of iterations, unless covered by a previous test.
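A minimal sketch (assuming a loop whose minimum is zero and whose maximum is N_MAX; the values of N_MAX and TYPICAL are placeholders) that turns the checklist above into concrete iteration counts to probe:

    N_MAX = 100   # assumed maximum iteration count
    TYPICAL = 7   # assumed typical iteration count

    candidate_iterations = [
        0,        # 1. bypass the loop entirely
        -1,       # 2. probe: can the loop control appear negative?
        1,        # 3. one pass through the loop
        2,        # 4. two passes through the loop
        TYPICAL,  # 5. a typical number of iterations
    ]
    print(candidate_iterations)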
Kinds of Loops: There are only three kinds of loops with respect to path testing:
Nested Loops:
The number of tests needed for nested loops grows as a power of the number of tests performed on single
loops, so we cannot always afford to test all combinations of the nested loops' iteration values. Here's a tactic
used to discard some of these values (see the sketch after this list):
1. Start at the innermost loop. Set all the outer loops to their minimum values.
2. Test the minimum, minimum+1, typical, maximum-1 , and maximum for the innermost loop, while holding
the outer loops at their minimum iteration parameter values. Expand the tests as required for out of range
and excluded values.
3. If you've done the outermost loop, GOTO step 5; else move out one loop and set it up as in step 2, with all
other loops set to typical values.
4. Continue outward in this manner until all loops have been covered.
5. Do all the cases for all loops in the nest simultaneously.
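A sketch of this tactic for two nested loops (the bounds and 'typical' values are assumptions chosen for illustration):

    def single_loop_values(minimum, maximum, typical):
        # min, min+1, typical, max-1, max: the single-loop prescription
        return sorted({minimum, minimum + 1, typical, maximum - 1, maximum})

    INNER = dict(minimum=0, maximum=10, typical=4)
    OUTER = dict(minimum=0, maximum=5, typical=2)

    cases = []
    # Steps 1-2: vary the innermost loop; hold the outer loop at its minimum.
    for i in single_loop_values(**INNER):
        cases.append((OUTER["minimum"], i))
    # Step 3: move out one loop; hold the inner loop at a typical value.
    for o in single_loop_values(**OUTER):
        cases.append((o, INNER["typical"]))
    print(cases)   # far fewer cases than the full cross product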
Concatenated Loops:
Concatenated loops fall between single and nested loops with respect to test cases. Two loops are concatenated
if it's possible to reach one after exiting the other while still on a path from entrance to exit.
If the loops cannot be on the same path, then they are not concatenated and can be treated as individual loops.
Horrible Loops:
A horrible loop is a combination of nested loops, the use of code that jumps into and out of loops, intersecting
loops, hidden loops, and cross connected loops.
Horrible loops make iteration-value selection for test cases an awesome and ugly task, which is another reason
such structures should be avoided.
Predicate: The logical function evaluated at a decision is called a predicate. The direction taken at a decision
depends on the value of the decision variable. Some examples: A > 0, x + y >= 90.
Path Predicate: A predicate associated with a path is called a path predicate. For example, "x is greater than
zero", "x + y >= 90", "w is either negative or equal to 10" is a sequence of predicates whose truth values
will cause the routine to take a specific path.
Multiway Branches:
o The path taken through a multiway branch, such as a computed GOTO, case statement, or jump
table, cannot be directly expressed in TRUE/FALSE terms.
o Although it is possible to describe such alternatives by using multivalued logic, an expedient (practical
approach) is to express multiway branches as an equivalent set of if-then-else statements.
o For example, a three-way case statement can be written as: IF case=1 DO A1 ELSE (IF case=2 DO A2 ELSE
DO A3 ENDIF) ENDIF.
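The same rewriting, sketched in Python (do_a1, do_a2 and do_a3 are hypothetical stand-ins for the actions A1, A2, A3):

    def dispatch(case, do_a1, do_a2, do_a3):
        # Three-way case expressed as nested if/else, so each decision
        # can be discussed in TRUE/FALSE terms.
        if case == 1:
            do_a1()
        else:
            if case == 2:
                do_a2()
            else:
                do_a3()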
Inputs:
o In testing, the word input is not restricted to direct inputs, such as variables in a subroutine call, but
includes all data objects referenced by the routine whose values are fixed prior to entering it.
o For example, inputs in a calling sequence, objects in a data structure, values left in registers, or any
combination of object types.
o The input for a particular test is mapped as a one-dimensional array called an input vector.
Predicate Expressions
Predicate interpretation:
o The simplest predicate depends only on input variables.
o For example, if x1, x2 are inputs, the predicate might be x1 + x2 >= 7; given the values of x1 and x2, the
direction taken through the decision is determined at input time and does not
depend on processing.
o As another example, assume a predicate x1 + y >= 0 and that along a path prior to reaching this predicate
we had the assignment statement y = x2 + 7. Although our predicate depends on processing, we can
substitute the symbolic expression for y to obtain an equivalent predicate x1 + x2 + 7 >= 0.
o The act of symbolic substitution of operations along the path in order to express the predicate solely in
terms of the input vector is called predicate interpretation.
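A small sketch of this substitution using sympy (assuming sympy is available):

    from sympy import symbols

    x1, x2, y = symbols("x1 x2 y")

    predicate = x1 + y >= 0                  # predicate as written at the decision
    interpreted = predicate.subs(y, x2 + 7)  # the path's earlier assignment y = x2 + 7
    print(interpreted)                       # x1 + x2 + 7 >= 0: inputs only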
Sometimes the interpretation may depend on the path; for example:
INPUT X
ON X GOTO A, B, C, ...
A: Z := 7 @ GOTO HEM
B: Z := -7 @ GOTO HEM
C: Z := 0 @ GOTO HEM
.........
HEM: DO SOMETHING
.........
HEN: IF Y + Z > 0 GOTO ELL ELSE GOTO EMM
The predicate interpretation at HEN depends on the path we took through the first multiway branch. It
yields, for the three cases respectively: Y + 7 > 0, Y - 7 > 0, and Y > 0.
o The path predicates are the specific form of the predicates of the decisions along the selected path
after interpretation.
Predicate coverage:
o Compound Predicate: Predicates of the form A OR B, A AND B, and more complicated boolean
expressions are called compound predicates.
o Sometimes even a simple predicate becomes compound after interpretation. Example: the predicate
if (x = 17), whose opposite branch is x.NE.17 (x not equal to 17), which is equivalent to the compound
predicate x > 17 OR x < 17.
o Predicate coverage means that all possible combinations of truth values corresponding to
the selected path have been explored under some test.
o Predicate coverage matters because achieving the desired direction at a given decision could still hide
bugs in the associated predicates.
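A sketch of what predicate coverage demands for a compound predicate A OR B: every truth-value combination, not merely both outcomes of the decision:

    from itertools import product

    def decision(a, b):
        return a or b

    # Branch coverage needs only one True and one False outcome;
    # predicate coverage asks for all four (a, b) combinations.
    for a, b in product([False, True], repeat=2):
        print(a, b, "->", decision(a, b))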
Testing blindness:
o Testing Blindness is a pathological (harmful) situation in which the desired path is achieved for the
wrong reason.
1. Assignment Blindness:
Assignment blindness occurs when the buggy predicate appears to work correctly because the specific
value chosen in an assignment statement happens to work with both the correct and the buggy predicate.
For Example:
Correct                       Buggy
X := 7                        X := 7
........                      ........
if Y > 0 then ...             if X + Y > 0 then ...
If the test case sets Y = 1, the desired path is taken in either case, but there is still a bug.
2. Equality Blindness:
Equality blindness occurs when the path selected by a prior predicate results in a value that
works both for the correct and buggy predicate.
For Example:
Correct                       Buggy
if Y = 2 then                 if Y = 2 then
........                      ........
if X + Y > 3 then ...         if X > 1 then ...
The first predicate (if Y = 2) forces the rest of the path, so that for any positive value of X the path
taken at the second predicate will be the same for the correct and buggy versions.
3. Self Blindness:
Self blindness occurs when the buggy predicate is a multiple of the correct predicate and as a
result is indistinguishable along that path.
For Example:
Correct                       Buggy
X := A                        X := A
........                      ........
if X - 1 > 0 then ...         if X + A - 2 > 0 then ...
The assignment (X := A) makes the predicates multiples of each other (X + A - 2 = 2X - 2 = 2(X - 1)), so the
direction taken is the same for the correct and buggy versions.
PATH SENSITIZING:
o For any path in that set, interpret the predicates along the path as needed to express them in terms of the
input vector. In general individual predicates are compound or may become compound as a result of
interpretation.
o Trace the path through, multiplying the individual compound predicates to achieve a boolean expression
such as
(A+BC)(D+E)(FGH)(IJ)(K)(L).
o Multiply out the expression to achieve a sum-of-products form:
ADFGHIJKL + AEFGHIJKL + BCDFGHIJKL + BCEFGHIJKL
o Each product term denotes a set of inequalities that if solved will yield an input vector that will drive the
routine along the designated path.
o Solve any one of the inequality sets for the chosen path and you have found a set of input values for the
path.
o If you can find a solution, then the path is achievable.
o If you can't find a solution to any of the sets of inequalities, the path is unachievable.
o The act of finding a set of solutions to the path predicate expression is called PATH SENSITIZATION.
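The multiply-out step can be checked mechanically. A sketch using sympy's boolean algebra on a shortened form of the expression above:

    from sympy import symbols
    from sympy.logic.boolalg import to_dnf

    A, B, C, D, E = symbols("A B C D E")

    expr = (A | (B & C)) & (D | E)   # (A + BC)(D + E)
    print(to_dnf(expr))              # (A & D) | (A & E) | (B & C & D) | (B & C & E)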
PATH INSTRUMENTATION:
o Path instrumentation is what we have to do to confirm that the outcome was achieved by the intended
path.
o Coincidental Correctness: Coincidental correctness stands for achieving the desired outcome for the
wrong reason.
Consider a routine that, for the (unfortunately) chosen input value (X = 16), yields the
same outcome (Y = 2) no matter which case we select. The tests chosen this way will not tell us
whether we have achieved coverage; for example, the five cases could be totally jumbled and still the outcome
would be the same. Path instrumentation is what we have to do to confirm that the outcome was achieved by
the intended path.
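A hedged reconstruction of the idea (the original figure's five cases are not reproduced; the computations below are invented so that X = 16 yields 2 in every case):

    import math

    cases = [
        lambda x: x - 14,            # 16 - 14      = 2
        lambda x: x / 8,             # 16 / 8       = 2
        lambda x: math.sqrt(x) - 2,  # sqrt(16) - 2 = 2
        lambda x: x % 7,             # 16 mod 7     = 2
        lambda x: 32 / x,            # 32 / 16      = 2
    ]
    # The outcome alone cannot reveal which case actually executed:
    print([f(16) for f in cases])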
o The types of instrumentation methods include:
1. Interpretive Trace Program:
An interpretive trace program is one that executes every statement in order and records the
intermediate values of all calculations, the statement labels traversed etc.
If we run the tested routine under a trace, then we have all the information we need to confirm the
outcome and, furthermore, to confirm that it was achieved by the intended path.
The trouble with traces is that they give us far more information than we need. In fact, the typical
trace program provides so much information that confirming the path from its massive output dump is
more work than simulating the computer by hand to confirm the path.
2. Traversal Marker or Link Marker:
A simple and effective form of instrumentation is called a traversal marker or link marker.
Name every link by a lower case letter.
Instrument the links so that the link's name is recorded when the link is executed.
The succession of letters produced in going from the routine's entry to its exit should, if there are no
bugs, exactly correspond to the path name.
Why Single Link Markers aren't enough: Unfortunately, a single link marker may not do the trick
because links can be chewed by open bugs.
We intended to traverse the ikm path, but because of a rampaging GOTO in the middle of the m link, we
go to process B. If coincidental correctness is against us, the outcomes will be the same and we won't
know about the bug.
3. Two Link Markers: A remedy is to use two markers per link: one at the beginning of each link and one at
the end. The succession of markers then specifies the path name and confirms both the beginning and the
end of each link.
4. Link Counter: A less disruptive (and less informative) instrumentation method is based on counters.
Instead of a unique link name to be pushed into a string when the link is traversed, we simply increment
a link counter. We now confirm that the path length is as expected. The same problem that led us to
double link markers also leads us to double link counters.
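A sketch of traversal markers in Python (the link names and routine structure are illustrative, not from the notes):

    trace = []

    def mark(link):
        trace.append(link)    # record the link's lowercase name

    def routine(x):
        mark("a")
        if x > 0:
            mark("b")
            x -= 1
        else:
            mark("c")
            x += 1
        mark("d")
        return x

    routine(5)
    # Compare the recorded succession of letters with the intended path name:
    assert "".join(trace) == "abd"   # confirms the path, not just the outcome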
TESTING vs DEBUGGING
1. The goal of testing is to detect errors in a program; the goal of debugging is to detect errors and correct them.
2. Testing is initiated with known conditions; debugging is initiated with unknown conditions.
3. In testing, the output can be predicted; in debugging, the output cannot be predicted.
4. Testing must be planned, designed and scheduled; debugging cannot have such constraints.
5. Testing demonstrates a program's failure; debugging finds the reason for that failure.
6. It is not necessary to have design knowledge while performing testing; detailed design knowledge is needed for debugging.
7. Testing can be performed by a person who does not belong to the development organization; debugging should be done by a person who does.
SMALL vs LARGE
1. Small programs have only a few lines of code; large programs have many lines of code.
2. Small programs consist of a few components; large programs consist of many components.
3. Small programs do not require any special technique for testing; large programs require different types of techniques for testing.
4. Small programs are more efficient; large programs are less efficient.
5. Small programs are written by a single programmer; large programs are written by different programmers.
6. The quality of a small program is high compared to that of a large program.
Fig (1): Two-way decisions (IF a > b THEN DO / ELSE DO) and multiway case statements (CASE 1, CASE 2, CASE 3).
12) What are the differences between control flow graphs and flowcharts?
CONTROL FLOW GRAPHS vs FLOWCHARTS
1. In a control flow graph, the details of the process are not shown inside the process block; in a flowchart, every statement of the process is shown separately.
2. In a control flow graph, all the statements are shown as a single process, and there is no need to know how many statements are in it; in a flowchart, every statement is shown, so if the process contains 100 steps, they must be shown in 100 process blocks.
3. The flow graph representation is easy to understand; a flowchart creates confusion about the control flow.
4. Nowadays, control flow graphs are used more in development; flowcharts are no longer used for many purposes.
Figure: a flow graph with nodes A, C, D, E, B from entry to exit; D branches either through E to B or directly to B.
There are 2 different paths from the entry (A) to the exit (B): ACDEB and ACDB.
Though both paths are simple, the more obvious of the two is ACDB because it is the shortest
path between the entry and the exit.
14) Define Node and Link.
Nodes are the graphical representation of real-world objects and are mainly denoted by small circles.
A node that has more than one input link is known as a junction, and a node that has more than one
output link is known as a decision.
Nodes can be labeled by letters or numbers.
Example:
Figure: a flow graph from Entry to Exit with nodes A, C, D, E, F, B.
In the above fig, A, B, C, D, E, F are nodes. D is the decision which has 2 output links and F is a junction which
has 2 input links.
LINKS:
A link is a mediator between any two nodes. A link is denoted by an arrow and can be labeled by lowercase
letters (e.g., a, b, c, d, e, f).
SOFTWARE TESTING
UNIT II
TRANSACTION FLOWS:
INTRODUCTION:
• A transaction is a unit of work seen from a system user's point of view.
• A transaction consists of a sequence of operations, some of which are performed by a system, persons or
devices that are outside of the system.
• Transactions begin with birth; that is, they are created as a result of some external act.
• At the conclusion of the transaction's processing, the transaction is no longer in the system.
• Example of a transaction: A transaction for an online information retrieval system might consist of the
following steps or tasks:
o Accept input (tentative birth)
o Validate input (birth)
o Transmit acknowledgement to requester
o Do input processing
o Search file
o Request directions from user
o Accept input
o Validate input
o Process request
o Update file
o Transmit output
o Record transaction in log and clean up (death)
TRANSACTION FLOW GRAPHS:
• Transaction flows are introduced as a representation of a system's processing.
• An example of a Transaction Flow is as follows:
• The methods that were applied to control flow graphs are then used for functional testing.
• Transaction flows and transaction flow testing are to the independent system tester what control flows and
path testing are to the programmer.
• The purpose of the transaction flow graph is to create a behavioral model of the program that leads to functional testing.
• The transaction flow graph is a model of the structure of the system's behavior (functionality).
USAGE:
• Transaction flows are indispensable for specifying requirements of complicated systems, especially online
systems.
• A big system, such as an air traffic control or airline reservation system, has not hundreds but thousands of
different transaction flows.
• The flows are represented by relatively simple flow graphs, many of which have a single straight-through path.
• Loops are infrequent compared to control flow graphs.
• The most common loop is used to request a retry after user input errors. An ATM system, for example, allows
the user to try, say three times, and will take the card away the fourth time.
COMPLICATIONS:
• In simple cases, the transactions have a unique identity from the time they're created to the time they're
completed.
• In many systems the transactions can give birth to others, and transactions can also merge.
• Births: There are three different possible interpretations of the decision symbol, or nodes with two or more out
links. It can be a Decision, Biosis or a Mitosis.
(a) Decision: Here the transaction will take one alternative or the other alternative but not both.
(b) Biosis: Here the incoming transaction gives birth to a new transaction, both transactions continue
on their separate paths, and the parent retains its identity.
(c) Mitosis: Here the parent transaction is destroyed and two new transactions are created.
We have no problem with ordinary decisions and junctions. Births, absorptions, and conjugations are as
problematic for the software designer as they are for the software modeler and the test designer; as a
consequence, such points have more than their share of bugs. The common problems are: lost daughters,
wrongful deaths, and illegitimate births.
DATA FLOW TESTING:
• Data flow testing is the name given to a family of test strategies based on selecting paths through the
program's control flow in order to explore sequences of events related to the status of data objects.
• For example, pick enough paths to assure that every data object has been initialized prior to use or that all
defined objects have been used for something.
• Motivation:
it is our belief that, just as one would not feel confident about a program without executing every statement in it as
part of some test, one should not feel confident about a program without having seen the effect of using the value
produced by each and every computation.
• There are two types of data flow machines with different architectures: (1) Von Neumann machines and (2)
multi-instruction, multi-data machines (MIMD).
• Von Neumann Machine Architecture:
o Most computers today are von Neumann machines.
o This architecture features interchangeable storage of instructions and data in the same memory units.
o The Von Neumann machine Architecture executes one instruction at a time in the following, micro
instruction sequence:
1. Fetch instruction from memory
2. Interpret instruction
3. Fetch operands
4. Process or Execute
5. Store result
6. Increment program counter
7. GOTO 1
• Multi-instruction, Multi-data machines (MIMD) Architecture:
o These machines can fetch several instructions and objects in parallel.
o They can also do arithmetic and logical operations simultaneously on different data objects.
o The decision of how to sequence them depends on the compiler.
Bug assumption:
• The bug assumption for data-flow testing strategies is that control flow is generally correct and that something
has gone wrong with the software so that data objects are not available when they should be, or silly things are
being done to data objects.
• Also, if there is a control-flow problem, we expect it to have symptoms that can be detected by data-flow
analysis.
• Although we'll be doing data-flow testing, we won't be using data flow graphs as such. Rather, we'll use an
ordinary control flow graph annotated to show what happens to the data objects of interest at the moment.
• The data flow graph is a graph consisting of nodes and directed links.
• We will use an annotated control flow graph to show what happens to the data objects of interest at each moment.
• Our objective is to expose deviations between the data flows we have and the data flows we want.
• Data Object State and Usage:
o Data Objects can be created, killed and used.
o They can be used in two distinct ways: (1) In a Calculation (2) As a part of a Control Flow Predicate.
o The following symbols denote these possibilities:
1. Defined: d - defined, created, initialized etc
2. Killed or undefined: k - killed, undefined, released etc
3. Usage: u - used for something (c - used in Calculations, p - used in a predicate)
o 1. Defined (d):
▪ An object is defined explicitly when it appears in a data declaration.
▪ Or implicitly when it appears on the left hand side of the assignment.
▪ It is also used to mean that a file has been opened.
▪ A dynamically allocated object has been allocated.
▪ Something is pushed on to the stack.
▪ A record written.
o 2. Killed or Undefined (k):
▪ An object is killed or undefined when it is released or otherwise made unavailable, or
when its contents are no longer known with certitude (absolute certainty).
▪ Release of dynamically allocated objects back to the availability pool.
▪ Return of records.
▪ The old top of the stack after it is popped.
▪ An assignment statement can kill and redefine immediately. For example, if A had been
previously defined and we do a new assignment such as A : = 17, we have killed A's previous
value and redefined A
o 3. Usage (u):
▪ A variable is used for computation (c) when it appears on the right hand side of an assignment
statement.
▪ A file record is read or written.
▪ It is used in a Predicate (p) when it appears directly in a predicate.
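A small sketch annotating the data-flow actions for one variable, A, statement by statement, using the d/k/c/p notation above (the routine itself is an arbitrary illustration):

    def demo(b, c):
        a = b + c      # d : A defined (created by assignment)
        if a > 0:      # p : A used in a predicate
            b = a * 2  # c : A used in a calculation
        a = 7          # kd: A's previous value killed, A redefined
        return a       # c : A used in a computation (returned)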
• Data flow anomaly model prescribes that an object can be in one of four distinct states:
1. K :- undefined, previously killed, does not exist
2. D :- defined but not yet used for anything
3. U :- has been used for computation or in predicate
4. A :- anomalous
• These capital letters (K,D,U,A) denote the state of the variable and should not be confused with the program
action, denoted by lower case letters.
• Unforgiving Data - Flow Anomaly Flow Graph: Unforgiving model, in which once a variable becomes anomalous
it can never return to a state of grace.
Assume that the variable starts in the K state - that is, it has not been defined or does not exist. If an attempt is made to
use it or to kill it (e.g., say that we're talking about opening, closing, and using files and that 'killing' means closing), the
object's state becomes anomalous (state A) and, once it is anomalous, no action can return the variable to a working
state. If it is defined (d), it goes into the D, or defined but not yet used, state. If it has been defined (D) and redefined
(d) or killed without use (k), it becomes anomalous, while usage (u) brings it to the U state. If in U, redefinition (d)
brings it to D, u keeps it in U, and k kills it.
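A sketch of the unforgiving model as a transition table (states K, D, U, A; program actions d, u, k), which can be used to scan the sequence of actions applied to a variable along a path:

    TRANSITIONS = {
        ("K", "d"): "D", ("K", "u"): "A", ("K", "k"): "A",
        ("D", "d"): "A", ("D", "u"): "U", ("D", "k"): "A",
        ("U", "d"): "D", ("U", "u"): "U", ("U", "k"): "K",
        ("A", "d"): "A", ("A", "u"): "A", ("A", "k"): "A",  # no redemption
    }

    def final_state(actions, state="K"):
        for act in actions:
            state = TRANSITIONS[(state, act)]
        return state

    print(final_state("du"))  # U : define, then use (healthy)
    print(final_state("dd"))  # A : redefined without use (anomalous)
    print(final_state("u"))   # A : used before definition (anomalous)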
Forgiving Data - Flow Anomaly Flow Graph: Forgiving model is an alternate model where redemption (recover) from
the anomalous state is possible.
• This graph has three normal and three anomalous states, and it considers the kk sequence not to be
anomalous. The difference between this state graph and the unforgiving graph above is that redemption is
possible: a proper action from any of the three anomalous states returns the variable to a useful working state.
The point of showing you this alternative anomaly state graph is to demonstrate that the specifics of an
anomaly depend on such things as language, application, context, or even your frame of mind. In principle,
you must create a new definition of data flow anomaly (e.g., a new state graph) for each situation. You must at
least verify that the anomaly definition behind the theory or embedded in a data flow anomaly test tool is
appropriate to your situation.
• Here we annotate each link with symbols (for example, d, k, u, c, p) or sequences of symbols (for example, dd,
du, ddd) that denote the sequence of data operations on that link with respect to the variable of interest. Such
annotations are called link weights.
• The control flow graph structure is same for every variable: it is the weights that change.
• Components of the model:
1. To every statement there is a node, whose name is unique. Every node has at least one out link and at
least one in link except for exit nodes and entry nodes.
2. Exit nodes are dummy nodes placed at the outgoing arrowheads of exit statements (e.g., END,
RETURN), to complete the graph. Similarly, entry nodes are dummy nodes placed at entry statements
(e.g., BEGIN) for the same reason.
3. The out link of simple statements (statements with only one out link) are weighted by the proper
sequence of data-flow actions for that statement. Note that the sequence can consist of more than one
letter. For example, the assignment statement A:= A + B in most languages is weighted by cd or
possibly ckd for variable A. Languages that permit multiple simultaneous assignments and/or
compound statements can have anomalies within the statement. The sequence must correspond to the
order in which the object code will be executed for that variable.
4. Predicate nodes (e.g., IF-THEN-ELSE, DO WHILE, CASE) are weighted with the p - use(s) on every out
link, appropriate to that out link.
5. Every sequence of simple statements (e.g., a sequence of nodes with one in link and one out link) can
be replaced by a pair of nodes that has, as weights on the link between them, the concatenation of link
weights.
6. If there are several data-flow actions on a given link for a given variable, then the weight of the link is
denoted by the sequence of actions on that link for that variable.
7. Conversely, a link with several data-flow actions on it can be replaced by a succession of equivalent
links, each of which has at most one data-flow action for any variable.
Introduction:
• A strategy X is stronger than another strategy Y if all test cases produced under Y are included in those
produced under X - conversely for weaker.
Terminology:
1. Definition-Clear Path Segment, with respect to variable X, is a connected sequence of links such that X is
(possibly) defined on the first link and not redefined or killed on any subsequent link of that path segment. All
paths in Figure 3.9 are definition-clear because variables X and Y are defined only on the first link (1,3) and not
thereafter. In Figure 3.10, we have a more complicated situation. The following path segments are definition-
clear: (1,3,4), (1,3,5), (5,6,7,4), (7,8,9,6,7), (7,8,9,10), (7,8,10), (7,8,10,11). Subpath (1,3,4,5) is not definition-
clear because the variable is defined on (1,3) and again on (4,5). For practice, try finding all the definition-clear
subpaths for this routine (i.e., for all variables).
2. Loop-Free Path Segment is a path segment for which every node in it is visited at most once. For example, path
(4,5,6,7,8,10) in Figure 3.10 is loop-free, but path (10,11,4,5,6,7,8,10,11,12) is not because nodes 10 and 11 are
each visited twice.
3. Simple path segment is a path segment in which at most one node is visited twice. For example, in Figure 3.10,
(7,4,5,6,7) is a simple path segment. A simple path segment is either loop-free or if there is a loop, only one
node is involved.
4. A du path from node i to k is a path segment such that if the last link has a computational use of X, then the
path is simple and definition-clear; if the penultimate (last but one) node is j - that is, the path is
(i,p,q,...,r,s,t,j,k) and link (j,k) has a predicate use - then the path from i to j is both loop-free and definition-
clear.
Strategies: The structural test strategies discussed below are based on the program's control flow graph. They differ in
the extent to which predicate uses and/or computational uses of variables are included in the test set. Various types of
data flow testing strategies in decreasing order of their effectiveness are:
• All - du Paths (ADUP): The all-du-paths (ADUP) strategy is the strongest data-flow testing strategy discussed
here. It requires that every du path from every definition of every variable to every use of that definition be
exercised under some test.
For variable X and Y:In Figure 3.9, because variables X and Y are used only on link (1,3), any test that starts at
the entry satisfies this criterion (for variables X and Y, but not for all variables as required by the strategy).
For variable Z: The situation for variable Z (Figure 3.10) is more complicated because the variable is redefined in
many places. For the definition on link (1,3) we must exercise paths that include subpaths (1,3,4) and (1,3,5).
The definition on link (4,5) is covered by any path that includes (5,6), such as subpath (1,3,4,5,6, ...). The (5,6)
definition requires paths that include subpaths (5,6,7,4) and (5,6,7,8).
For variable V: Variable V (Figure 3.11) is defined only once on link (1,3). Because V has a predicate use at node
12 and the subsequent path to the end must be forced for both directions at node 12, the all-du-paths strategy
for this variable requires that we exercise all loop-free entry/exit paths and at least one path that includes the
loop caused by (11,4). Note that we must test paths that include both subpaths (3,4,5) and (3,5) even though
neither of these has V definitions. They must be included because they provide alternate du paths to the V use
on link (5,6). Although (7,4) is not used in the test set for variable V, it will be included in the test set that covers
the predicate uses of array variable V() and U.
The all-du-paths strategy is a strong criterion, but it does not take as many tests as it might seem at first
because any one test simultaneously satisfies the criterion for several definitions and uses of several different
variables.
• All Uses Strategy (AU): The all uses strategy requires that at least one definition-clear path from every definition of
every variable to every use of that definition be exercised under some test. Just as we reduced our ambitions
by stepping down from all paths (P) to branch coverage (C2), say, we can reduce the number of test cases by
asking that the test set should include at least one path segment from every definition to every use that can be
reached by that definition.
For variable V: In Figure 3.11, ADUP requires that we include subpaths (3,4,5) and (3,5) in some test because
subsequent uses of V, such as on link (5,6), can be reached by either alternative. In AU either (3,4,5) or (3,5) can
be used to start paths, but we don't have to use both. Similarly, we can skip the (8,10) link if we've included the
(8,9,10) subpath. Note the hole. We must include (8,9,10) in some test cases because that's the only way to
reach the c use at link (9,10) - but suppose our bug for variable V is on link (8,10) after all? Find a covering set of
paths under AU for Figure 3.11.
• All p-uses/some c-uses strategy (APU+C): For every variable and every definition of that variable, include at
least one definition-clear path from the definition to every predicate use; if there are definitions of the variables
that are not covered by the above prescription, then add computational use test cases as required to cover
every definition.
For variable Z: In Figure 3.10, for APU+C we can select paths that all take the upper link (12,13) and therefore
we do not cover the c-use of Z: but that's okay according to the strategy's definition because every definition is
covered. Links (1,3), (4,5), (5,6), and (7,8) must be included because they contain definitions for variable Z. Links
(3,4), (3,5), (8,9), (8,10), (9,6), and (9,10) must be included because they contain predicate uses of Z. Find a
covering set of test cases under APU+C for all variables in this example - it only takes two tests.
For variable V: In Figure 3.11, APU+C is achieved for V by (1,3,5,6,7,8,10,11,4,5,6,7,8,10,11,12[upper], 13,2) and
(1,3,5,6,7,8,10,11,12[lower], 13,2). Note that the c-use at (9,10) need not be included under the APU+C
criterion.
• All c-uses/some p-uses strategy (ACU+P) : The all c-uses/some p-uses strategy (ACU+P) is to first ensure
coverage by computational use cases and if any definition is not covered by the previously selected paths, add
such predicate use cases as are needed to assure that every definition is included in some test.
For variable Z: In Figure 3.10, ACU+P coverage is achieved for Z by path (1,3,4,5,6,7,8,10, 11,12,13[lower], 2),
but the predicate uses of several definitions are not covered. Specifically, the (1,3) definition is not covered for
the (3,5) p-use, the (7,8) definition is not covered for the (8,9), (9,6) and (9, 10) p-uses.
The above examples imply that APU+C is stronger than branch coverage but ACU+P may be weaker than, or
incomparable to, branch coverage.
• All Definitions Strategy (AD) : The all definitions strategy asks only that every definition of every variable be
covered by at least one use of that variable, be that use a computational use or a predicate use.
For variable Z: Path (1,3,4,5,6,7,8, . . .) satisfies this criterion for variable Z, whereas any entry/exit path satisfies
it for variable V.
From the definition of this strategy we would expect it to be weaker than both ACU+P and APU+C.
• All Predicate Uses (APU), All Computational Uses (ACU) Strategies : The all predicate uses strategy is derived
from APU+C strategy by dropping the requirement that we include a c-use for the variable if there are no p-
uses for the variable. The all computational uses strategy is derived from ACU+P strategy by dropping the
requirement that we include a p-use for the variable if there are no c-uses for the variable.
It is intuitively obvious that ACU should be weaker than ACU+P and that APU should be weaker than APU+C.
Ordering the strategies:
• Figure compares path-flow and data-flow testing strategies. The arrows denote that the strategy at the arrow's
tail is stronger than the strategy at the arrow's head.
• The right-hand side of this graph, along the path from "all paths" to "all statements" is the more interesting
hierarchy for practical applications.
• Note that although ACU+P is stronger than ACU, both are incomparable to the predicate-biased strategies.
Note also that "all definitions" is not comparable to ACU or APU.
• A (static) program slice is a part of a program (e.g., a selected set of statements) defined with respect to a given
variable X (where X is a simple variable or a data vector) and a statement i: it is the set of all statements that
could (potentially, under static analysis) affect the value of X at statement i - where the influence of a faulty
statement could result from an improper computational use or predicate use of some other variables at prior
statements.
• If X is incorrect at statement i, it follows that the bug must be in the program slice for X with respect to i.
• A program dice is a part of a slice in which all statements which are known to be correct have been removed.
• In other words, a dice is obtained from a slice by incorporating information obtained through testing or
experiment (e.g., debugging).
• The debugger first limits her scope to those prior statements that could have caused the faulty value at
statement i (the slice) and then eliminates from further consideration those statements that testing has shown
to be correct.
• Debugging can be modeled as an iterative procedure in which slices are further refined by dicing, where the
dicing information is obtained from ad hoc tests aimed primarily at eliminating possibilities. Debugging ends
when the dice has been reduced to the one faulty statement.
• Dynamic slicing is a refinement of static slicing in which only statements on achievable paths to the statement
in question are included.
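As a hedged illustration of slicing and dicing (a hypothetical routine, not from the text):

    # Suppose testing shows that z is wrong at the return statement.
    def f(a, b):
        x = a + 1      # in the static slice for z: x feeds y, which feeds z
        y = x * 2      # in the slice for z
        w = b - 1      # NOT in the slice: w never influences z
        z = y + a      # the statement that computes the faulty value
        return z

    # Slice for z at the return: {x = a + 1, y = x * 2, z = y + a}.
    # If further tests show x and y take correct values, dicing removes
    # their statements, leaving z = y + a as the remaining suspect.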
Z=0;
Z=A+B;
From the above example, we notice that object 'Z' is defined twice without an intervening use, so the first
definition (Z=0) is wasted. Here a dd anomaly occurs.
5) What are the different states of data objects?
Data flow analysis defines an object to be in one of the following states:
S.NO  SYMBOL  STATE
1.    K       undefined, killed
2.    D       defined
3.    U       used (for computation or in a predicate)
4.    A       anomalous
The two transactions, transaction 1 and transaction 2, may have the same or different identities.
Fig: Decision
In the third case, the parent transaction is destroyed and two new transactions of the same identity are created.
This situation is called MITOSIS.
UNIT-II
1. What is a transaction?
2. What are the applications of transaction flows? (Nov-2018)
3. Write about inspections? Define data flow anomaly with an example?
4. What are the different states of data objects? Write about Von Neumann
Machines.
5. What are Transaction Flow Junctions? (Nov-2018 , June-2017)
6. What are Transaction Flow Mergers? (Nov-2016)
QB2
SOFTWARE TESTING
UNIT III
Domain Testing: Domains and Paths, Nice & Ugly Domains, Domain testing, Domains and Interfaces Testing, Domains
and Testability.
• Domain: In mathematics, domain is a set of possible values of an independent variable or the variables of a
function.
• Programs as input data classifiers: domain testing attempts to determine whether the classification is or is not
correct.
• Domain testing can be based on specifications or equivalent implementation information.
• If domain testing is based on specifications, it is a functional test technique.
• If domain testing is based on implementation details, it is a structural test technique.
• For example, you're doing domain testing when you check extreme values of an input variable.
All inputs to a program can be considered as if they are numbers. For example, a character string can be treated as a
number by concatenating bits and looking at them as if they were a binary integer. This is the view in domain testing,
which is why this strategy has a mathematical flavor.
The Model: The following figure is a schematic representation of domain testing.
• Before doing whatever it does, a routine must classify the input and set it moving on the right path.
• An invalid input (e.g., value too big) is just a special processing case called 'reject'.
• The input is then passed on to a hypothetical subroutine that does the actual processing.
• In domain testing, we focus on the classification aspect of the routine rather than on the calculations.
• Structural knowledge is not needed for this model - only a consistent, complete specification of input values for
each case.
• We can infer that for each case there must be at least one path to process that case.
A domain is a set:
• An input domain is a set.
• If the source language supports set definitions (E.g. PASCAL set types and C enumerated types) less testing is
needed because the compiler does much of it for us.
• Domain testing does not work well with arbitrary discrete sets of data objects.
• Domain for a loop-free program corresponds to a set of numbers defined over the input vector.
• If domain testing is applied to structure, then predicate interpretation must be based on actual paths through
the routine - that is, based on the implementation control flow graph.
• Conversely, if domain testing is applied to specifications, interpretation is based on a specified data flow graph
for the routine; but usually, as is the nature of specifications, no interpretation is needed because the domains
are specified directly.
• For every domain, there is at least one path through the routine.
• There may be more than one path if the domain consists of disconnected parts or if the domain is defined by
the union of two or more domains.
• Domains are defined by their boundaries. Domain boundaries are also where most domain bugs occur.
• For every boundary there is at least one predicate that specifies what numbers belong to the domain and what
numbers don’t.
For example, in the statement IF x>0 THEN ALPHA ELSE BETA we know that numbers greater than zero belong
to ALPHA processing domain(s) while zero and smaller numbers belong to BETA domain(s).
• A domain may have one or more boundaries - no matter how many variables define it.
For example, if the predicate is x^2 + y^2 < 16, the domain is the inside of a circle of radius 4 about the origin.
Similarly, we could define a spherical domain with one boundary but in three variables.
• Domains are usually defined by many boundary segments and therefore by many predicates: the set of
interpreted predicates traversed on a path (i.e., the path's predicate expression) defines the domain's
boundaries.
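A minimal sketch of the IF x>0 THEN ALPHA ELSE BETA example as an input classifier (the domain names are from the text; the Python code itself is illustrative):

    def classify(x):
        if x > 0:
            return "ALPHA"   # domain: x > 0 (the boundary x = 0 is open for ALPHA)
        else:
            return "BETA"    # domain: x <= 0 (the boundary point 0 belongs to BETA)

    assert classify(1) == "ALPHA"    # interior point of ALPHA
    assert classify(0) == "BETA"     # the boundary point goes to BETA
    assert classify(-1) == "BETA"    # interior point of BETA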
A domain closure:
• A domain boundary is closed with respect to a domain if the points on the boundary belong to the domain.
• If the boundary points belong to some other domain, the boundary is said to be open.
• Figure 4.2 shows three situations for a one-dimensional domain - i.e., a domain defined over one input variable;
call it x
• The importance of domain closure is that incorrect closure bugs are frequent domain bugs. For example, x >= 0
when x > 0 was intended.
Domain dimensionality:
• Every input variable adds one dimension to the domain.
• One variable defines domains on a number line.
• Two variables define planar domains.
• Three variables define solid domains.
• Every new predicate slices through previously defined domains and cuts them in half.
• Every boundary slices through the input vector space with a dimensionality which is less than the
dimensionality of the space.
• Thus, planes are cut by lines and points, volumes by planes, lines and points and n-spaces by hyperplanes.
Bug assumption:
• The bug assumption for the domain testing is that processing is okay but the domain definition is wrong.
• An incorrectly implemented domain means that boundaries are wrong, which may in turn mean that control
flow predicates are wrong.
• Many different bugs can result in domain errors. Some of them are:
Domain Errors:
o Double Zero Representation: In computers or languages that have a distinct positive and negative
zero, boundary errors for negative zero are common.
o Floating point zero check: A floating point number can equal zero only if the previous definition of that
number set it to zero, or if it is subtracted from itself or multiplied by zero. So the floating point zero
check should be done against an epsilon value.
o Contradictory domains: An implemented domain can never be ambiguous or contradictory, but a
specified domain can. A contradictory domain specification means that at least two supposedly distinct
domains overlap.
o Ambiguous domains: An ambiguous domain means that the union of the domains is incomplete; that is,
there are missing domains or holes in the specified domains. Not specifying what happens to points on
the domain boundary is a common ambiguity.
o Over specified Domains: The domain can be overloaded with so many conditions that the result is a
null domain. Another way to put it is to say that the domain's path is unachievable.
o Boundary Errors: Errors caused in and around the boundary of a domain. Example, boundary closure
bug, shifted, tilted, missing, extra boundary.
o Closure Reversal: A common bug. The predicate is defined in terms of >=. The programmer chooses to
implement the logical complement and incorrectly uses <= for the new predicate; i.e., x >= 0 is
incorrectly negated as x <= 0, thereby shifting boundary values to adjacent domains.
o Faulty Logic :Compound predicates (especially) are subject to faulty logic transformations and
improper simplification. If the predicates define domain boundaries, all kinds of domain bugs can result
from faulty logic manipulations.
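A hedged sketch of the closure-reversal bug listed above (hypothetical code):

    # Intended domain predicate: x >= 0.
    def in_domain(x):
        return x >= 0

    # Buggy "complement": the programmer wrote <= instead of <.
    def not_in_domain_buggy(x):
        return x <= 0        # BUG: overlaps in_domain at the boundary x == 0

    # The boundary point x = 0 now satisfies both predicates, so it is
    # routed to both domains' processing - a contradictory domain.
    print(in_domain(0), not_in_domain_buggy(0))   # True True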
• Functional Homogeneity of Bugs: Whatever the bug is, it will not change the functional form of the boundary
predicate. For example, if the predicate is ax >= b, the bug will be in the value of a or b, but it will not change
the predicate to a different functional form such as ax^2 >= b, say.
• Linear Vector Space: Most papers on domain testing assume linear boundaries - not a bad assumption
because in practice most boundary predicates are linear.
• Loop Free Software: Loops are problematic for domain testing. The trouble with loops is that each iteration can
result in a different predicate expression (after interpretation), which means a possible domain boundary
change.
• Such boundaries can come about because the path that hypothetically corresponds to them is unachievable,
because inputs are constrained in such a way that such values can't exist, because of compound predicates that
define a single boundary, or because redundant predicates convert such boundary values into a null set.
• The advantage of complete boundaries is that one set of tests is needed to confirm the boundary no matter
how many domains it bounds.
• If the boundary is chopped up and has holes in it, then every segment of that boundary must be tested for
every domain it bounds.
Systematic boundaries:
• Systematic boundary means that the boundary inequalities are related by a simple function, such as differing
by a constant. In Figure, the domain boundaries for u and v differ only by a constant. We want relations such as
f_i(X) >= k_i or f_i(X) >= g(i,c), where f_i is an arbitrary linear function, X is the input vector, k_i and c are
constants, and g(i,c) is a decent function over i and c that yields a constant, such as k + ic.
• The first example is a set of parallel lines, and the second example is a set of systematically (e.g., equally)
spaced parallel lines (such as the spokes of a wheel, if equally spaced in angles, systematic).
• If the boundaries are systematic and if you have one tied down and generate tests for it, the tests for the rest
of the boundaries in that set can be automatically generated.
Orthogonal boundaries:
• Two boundary sets U and V are said to be orthogonal if every inequality in V is perpendicular to every
inequality in U.
• If two boundary sets are orthogonal, then they can be tested independently
• In Figure, we have six boundaries in U and four in V. We can confirm the boundary properties in a number of
tests proportional to 6 + 4 = 10 (O(n)). If we tilt the boundaries to get Figure, we must now test the
intersections. We've gone from a linear number of cases to a quadratic one: from O(n) to O(n^2).
Actually, there are two different but related orthogonality conditions. Sets of boundaries can be orthogonal to
one another but not orthogonal to the coordinate axes (condition 1), or boundaries can be orthogonal to the
coordinate axes (condition 2).
Closure consistency:
• Figure shows another desirable domain property: boundary closures are consistent and systematic.
• The shaded areas on the boundary denote that the boundary belongs to the domain in which the shading lies -
e.g., the boundary lines belong to the domains on the right.
• Consistent closure means that there is a simple pattern to the closures - for example, using the same relational
operator for all boundaries of a set of parallel boundaries.
Convex:
• A geometric figure (in any number of dimensions) is convex if you can take two arbitrary points on any two
different boundaries, join them by a line and all points on that line lie within the figure.
• Nice domains are convex; dirty domains aren't.
• You can smell a suspected concavity when you see phrases such as: ". . . except if . . .," "However . . .," ". . . but
not. ..... " In programming, it's often the buts in the specification that kill you.
Simply connected:
• Nice domains are simply connected; that is, they are in one piece rather than pieces all over the place
interspersed with other domains.
• Simple connectivity is a weaker requirement than convexity; if a domain is convex it is simply connected, but
not vice versa.
• Consider domain boundaries defined by a compound predicate of the (boolean) form ABC. Say that the input
space is divided into two domains, one defined by ABC and, therefore, the other defined by its negation NOT(ABC).
• For example, suppose we define valid numbers as those lying between 10 and 17 inclusive. The invalid numbers
are the disconnected domain consisting of numbers less than 10 and greater than 17.
• Simple connectivity, especially for default cases, may be impossible.
Ugly domains:
• Some domains are born ugly and some are uglified by bad specifications.
• Every simplification of ugly domains by programmers can be either good or bad.
• Programmers in search of nice solutions will "simplify" essential complexity out of existence. Testers in search
of brilliant insights will be blind to essential complexity and therefore miss important cases.
• If the ugliness results from bad specifications and the programmer's simplification is harmless, then the
programmer has made ugly good.
• But if the domain's complexity is essential (e.g., the income tax code), such "simplifications" constitute bugs.
• Nonlinear boundaries are so rare in ordinary programming that there's no information on how programmers
might "correct" such boundaries if they're essential.
Ambiguities and contradictions:
• Domain ambiguities are holes in the input space.
• The holes may lie within the domains or in cracks between domains.
• Two kinds of contradictions are possible: overlapped domain specifications and overlapped closure
specifications
Simplifying the topology:
• The programmer's and tester's reaction to complex domains is the same - simplify
• There are three generic cases: concavities, holes and disconnected pieces.
• Programmers introduce bugs and testers misdesign test cases by: smoothing out concavities (Figure 4.8a),
filling in holes (Figure 4.8b), and joining disconnected pieces.
• If domain boundaries are parallel but have closures that go every which way (left, right, left, . . .) the natural
reaction is to make closures go the same way.
DOMAIN TESTING:
Domain Testing Strategy: The domain-testing strategy is simple, although possibly tedious (slow).
1. Domains are defined by their boundaries; therefore, domain testing concentrates test points on or near
boundaries.
2. Classify what can go wrong with boundaries, then define a test strategy for each case. Pick enough points to
test for all recognized kinds of boundary errors.
3. Because every boundary serves at least two different domains, test points used to check one domain can also
be used to check adjacent domains. Remove redundant test points.
4. Run the tests and by posttest analysis (the tedious part) determine if any boundaries are faulty and if so, how.
5. Run enough tests to verify every boundary of every domain.
Domain bugs and how to test for them:
• An interior point (Figure 4.10) is a point in the domain such that all points within an arbitrarily small distance
(called an epsilon neighborhood) are also in the domain.
• A boundary point is one such that within an epsilon neighborhood there are points both in the domain and not
in the domain.
• An extreme point is a point that does not lie between any two other arbitrary but distinct points of a (convex)
domain.
Figure shows generic domain bugs: closure bug, shifted boundaries, tilted boundaries, extra boundary, missing
boundary.
• We assumed that the boundary was to be open for A. The bug we're looking for is a closure error, which
converts > to >= or < to <= (Figure 4.13b). One test (marked x) on the boundary point detects this bug because
processing for that point will go to domain A rather than B.
• We've suffered a boundary shift to the left. The test point we used for closure detects this bug because the bug
forces the point from the B domain, where it should be, into A processing. Note that we can't distinguish
between a shift and a closure error, but we do know that we have a bug.
• Figure shows a shift the other way. The on point doesn't tell us anything because the boundary shift doesn't
change the fact that the test point will be processed in B. To detect this shift we need a point close to the
boundary but within A. The boundary is open, therefore by definition, the off point is in A (Open Off Inside).
• The same open off point also suffices to detect a missing boundary because what should have been processed
in A is now processed in B.
• To detect an extra boundary we have to look at two domain boundaries. In this context an extra boundary
means that A has been split in two. The two off points that we selected before (one for each boundary) do
the job. If point C had been a closed boundary, the on test point at C would do it.
• For closed domains, as for the open boundary, a test point on the boundary detects the closure bug. The rest
of the cases are similar to the open boundary, except that now the strategy requires off points just outside the
domain.
• A and B are adjacent domains and the boundary is closed with respect to A, which means that it is open with
respect to B.
y >= 7 was intended. The off point (closed off outside) catches this bug. Figure 4.15c shows a shift down
that is caught by the two on points.
3. Tilted Boundary: A tilted boundary occurs when coefficients in the boundary inequality are wrong. For
example, 3x + 7y > 17 when 7x + 3y > 17 was intended. Figure 4.15d has a tilted boundary, which
creates erroneous domain segments A' and B'. In this example the bug is caught by the left on point.
4. Extra Boundary: An extra boundary is created by an extra predicate. An extra boundary will slice
through many different domains and will therefore cause many test failures for the same bug. The
extra boundary in Figure 4.15e is caught by two on points, and depending on which way the extra
boundary goes, possibly by the off point also.
5. Missing Boundary: A missing boundary is created by leaving a boundary predicate out. A missing
boundary will merge different domains and will cause many test failures although there is only one
bug. A missing boundary, shown in Figure 4.15f, is caught by the two on points because the processing
for A and B is the same - either A or B processing.
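As a hedged sketch of the strategy (a one-dimensional domain with an assumed closed boundary at 10; all names are hypothetical):

    # Spec: domain A is x >= 10 (closed boundary at 10); domain B is x < 10.
    EPS = 1e-6                       # "small distance" for the off point

    def classify(x):                 # implementation under test
        return "A" if x >= 10 else "B"

    on_point = 10                    # on the boundary: must be processed as A
    off_point = 10 - EPS             # just off the boundary, inside B

    assert classify(on_point) == "A"    # catches a closure bug (> written for >=)
    assert classify(off_point) == "B"   # catches a boundary shift to the left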
Procedure For Testing: The procedure is conceptually straightforward. It can be done by hand for two dimensions
and a few domains, but it is practically impossible for more than two variables.
Introduction:
• Recall that we defined integration testing as testing the correctness of the interface between two otherwise
correct components.
• Components A and B have been demonstrated to satisfy their component tests, and as part of the act of
integrating them we want to investigate possible inconsistencies across their interface.
• Interface between any two components is considered as a subroutine call.
• We're looking for bugs in that "call" when we do interface testing.
• Let's assume that the call sequence is correct and that there are no type incompatibilities.
• For a single variable, the domain span is the set of numbers between (and including) the smallest value and the
largest value. For every input variable we want (at least): compatible domain spans and compatible closures
(Compatible but need not be Equal).
• The set of output values produced by a function is called the range of the function, in contrast with the domain,
which is the set of input values over which the function is defined.
• For most testing, our aim has been to specify input values and to predict and/or confirm output values that
result from those inputs.
• Interface testing requires that we select the output values of the calling routine i.e. caller's range must be
compatible with the called routine's domain.
• An interface test consists of exploring the correctness of the following mappings:
• caller domain --> caller range (caller unit test)
• caller range --> called domain (integration test)
• called domain --> called range (called unit test)
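A hedged sketch of a caller-range/called-domain span incompatibility (hypothetical routines):

    # The called routine's domain is 0 <= x <= 17.
    def called(x):
        if not (0 <= x <= 17):
            raise ValueError("x outside the called routine's domain")
        return 2 * x

    # The caller's range is 0..20 - wider than the called domain.
    def caller(u):
        x = u % 21           # produces values 0..20
        return called(x)     # values 18..20 cross the interface as domain bugs

    caller(5)                # fine: 5 lies in both the range and the domain
    # caller(19)             # would raise: a range/domain span incompatibility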
Closure compatibility:
• Assume that the caller's range and the called routine's domain span the same numbers - for example, 0 to 17.
• Figure 4.16 shows the four ways in which the caller's range closure and the called's domain closure can agree.
• The thick line means closed and the thin line means open. Figure shows the four cases consisting of domains
that are closed both on top (17) and bottom (0), open top and closed bottom, closed top and open bottom, and
open top and bottom.
Figure shows the twelve different ways the caller and the called can disagree about closure. Not all of them are
necessarily bugs. The four cases in which a caller boundary is open and the called is closed (marked with a "?")
are probably not buggy. It means that the caller will not supply such values but the called can accept them.
Span Compatibility:
• Figure shows three possibly harmless span incompatibilities.
Figure: Harmless Range / Domain Span incompatibility bug (Caller Span is smaller than Called).
• In all cases, the caller's range is a subset of the called's domain. That's not necessarily a bug.
• The routine is used by many callers; some require values inside a range and some don't. This kind of span
incompatibility is a bug only if the caller expects the called routine to validate the input values for the caller.
• Figure shows the opposite situation, in which the called routine's domain has a smaller span than the caller
expects. All of these examples are buggy.
• In Figure, the ranges and domains don't line up; hence good values are rejected, bad values are accepted, and
if the called routine isn't robust enough, we have crashes.
• Figure combines these notions to show various ways we can have holes in the domain: these are all probably
buggy.
• For interface testing, bugs are more likely to concern single variables rather than peculiar combinations of two
or more variables.
• Test every input variable independently of other input variables to confirm compatibility of the caller's range
and the called routine's domain span and closure of every domain defined for that variable.
• There are two boundaries to test and it's a one-dimensional domain; therefore, it requires one on point and one
off point per boundary, or a total of two on points and two off points for the domain - pick the off points
appropriate to the closure (COOOOI: Closed Off Outside, Open Off Inside).
• Start with the called routine's domains and generate test points in accordance to the domain-testing strategy
used for that routine in component testing.
• Unless you're a mathematical whiz you won't be able to do this without tools for more than one variable at a
time.
Equality predicates are defined by an equality equation such as x+y=9. They specify a lower-dimensional
domain. Equality predicates support only a few domain boundaries.
Inequality predicates are defined by an inequality such as x+y<9 or x+y>9. These support most of the
domain boundaries.
UNIT-III
1) What is domain?
2) List the bugs that lead to domain errors? (Nov-2018 , June-2017)
3) What is ugly domain? (Nov-2016 , June-2017)
4) Differentiate between specified domain and implemented domain?
5) Define Domain Testing? (Nov-2016)
6) Write about Random Testing? (Nov-2017 , June-2018)
7) What is missing boundary bug?
8) What do you mean by equality and inequality predicates?
QB3
SOFTWARE TESTING
UNIT IV
Paths, Path products and Regular expressions: Path Products & Path Expression, Reduction Procedure, Applications,
Regular Expressions & Flow Anomaly Detection.
Logic Based Testing: Overview, Decision Tables, Path Expressions, KV Charts, Specifications.
• MOTIVATION:
o Flow graphs are an abstract representation of programs.
o Any question about a program can be cast into an equivalent question about an appropriate flowgraph.
o Most software development, testing and debugging tools use flow graphs analysis techniques.
• PATH PRODUCTS:
o Normally flow graphs used to denote only control flow connectivity.
o The simplest weight we can give to a link is a name.
o Using link names as weights, we then convert the graphical flow graph into an equivalent algebraic
expression which denotes the set of all possible paths from entry to exit for the flow graph.
o Every link of a graph can be given a name.
o The link name will be denoted by lower case italic letters.
o In tracing a path or path segment through a flow graph, you traverse a succession of link names.
o The name of the path or path segment that corresponds to those links is expressed naturally by
concatenating those link names.
o For example, if you traverse links a,b,c and d along some path, the name for that path segment is abcd.
This path name is also called a path product. Figure shows some examples:
• Path expression:
o Consider a pair of nodes in a graph and the set of paths between those nodes.
o Denote that set of paths by an uppercase letter such as X or Y. From Figure 5.1c, the members of the path
set can be listed as follows:
ac, abc, abbc, abbbc, abbbbc.............
o Alternatively, the same set of paths can be denoted by :
ac+abc+abbc+abbbc+abbbbc+...........
o The + sign is understood to mean "or" between the two nodes of interest, paths ac, or abc, or abbc,
and so on can be taken.
o Any expression that consists of path names and "OR"s and which denotes a set of paths between two
nodes is called a "Path Expression.".
• Path products:
o The name of a path that consists of two successive path segments is conveniently expressed by the
concatenation or Path Product of the segment names.
o For example, if X and Y are defined as X=abcde,Y=fghij,then the path corresponding to X followed by Y
is denoted by
XY=abcdefghij
o Similarly,
YX=fghijabcde
aX=aabcde
Xa=abcdea
XaX=abcdeaabcde
o If X and Y represent sets of paths or path expressions, their product represents the set of paths that can
be obtained by following every element of X by any element of Y in all possible ways. For example,
X = abc + def + ghi
Y = uvw + z
Then,
XY = abcuvw + defuvw + ghiuvw + abcz + defz + ghiz
o If a link or segment name is repeated, that fact is denoted by an exponent. The exponent's value
denotes the number of repetitions:
a^1 = a; a^2 = aa; a^3 = aaa; a^n = aaa...a (n times).
Similarly, if
X = abcde
then
X^1 = abcde
X^2 = abcdeabcde = (abcde)^2
X^3 = abcdeabcdeabcde = (abcde)^2 abcde
= abcde(abcde)^2 = (abcde)^3
o The path product is not commutative (that is XY!=YX).
o The path product is Associative.
RULE 1: A (BC) = (AB) C=ABC
where A,B,C are path names, set of path names or path expressions.
o The zeroth power of a link name, path product, or path expression is also needed for completeness. It
is denoted by the numeral "1" and denotes the "path" whose length is zero - that is, the path that
doesn't have any links.
a^0 = 1
X^0 = 1
Path sums:
• The "+" sign was used to denote the fact that path names were part of the same set of paths.
• The "PATH SUM" denotes paths in parallel between nodes.
• Links a and b in Figure are parallel paths and are denoted by a + b. Similarly, links c and d are parallel paths
between the next two nodes and are denoted by c + d.
• The set of all paths between nodes 1 and 2 can be thought of as a set of parallel paths and denoted by
eacf+eadf+ebcf+ebdf.
• If X and Y are sets of paths that lie between the same pair of nodes, then X+Y denotes the UNION of those set
of paths. For example, in Figure:
o The first set of parallel paths is denoted by X + Y + d and the second set by U + V + W + h + i + j. The set
of all paths in this flow graph is f(X + Y + d)g(U + V + W + h + i + j)k
o The path is a set union operation; it is clearly Commutative and Associative.
RULE 2: X+Y=Y+X
RULE 3: (X+Y)+Z=X+(Y+Z)=X+Y+Z
• Distributive laws:
o The product and sum operations are distributive, and the ordinary rules of multiplication apply; that is
RULE 4: A(B+C)=AB+AC and (B+C)D=BD+CD
o Applying these rules to the below Figure 5.1a yields
e(a+b)(c+d)f=e(ac+ad+bc+bd)f = eacf+eadf+ebcf+ebdf
• Absorption rule:
o If X and Y denote the same set of paths, then the union of these sets is unchanged; consequently,
RULE 5: X+X=X (Absorption Rule)
o If a set consists of paths names and a member of that set is added to it, the "new" name, which is
already in that set of names, contributes nothing and can be ignored.
o For example,
if X=a+aa+abc+abcd+def then
X+a = X+aa = X+abc = X+abcd = X+def = X
It follows that any arbitrary sum of identical path expressions reduces to the same path expression.
• Loops:
o Loops can be understood as an infinite set of parallel paths. Say that the loop consists of a single link b.
Then the set of all paths through that loop point is b^0 + b^1 + b^2 + b^3 + b^4 + b^5 + ...
This potentially infinite sum is denoted by b* for an individual link and by X* when X is a path expression.
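These operations can be mimicked with ordinary sets of strings; a hedged sketch (the starred loop term is truncated at a chosen bound, since b* is an infinite sum):

    # Path product = concatenation of every pair; path sum = set union;
    # b* is approximated by enumerating b^0 .. b^n for a bound n.
    def product(X, Y):
        return {x + y for x in X for y in Y}

    def star(b, n):
        return {b * i for i in range(n + 1)}   # '' (= 1), b, bb, ..., b^n

    # The a b* c example: ac + abc + abbc + abbbc + ...
    paths = product({"a"}, product(star("b", 3), {"c"}))
    print(sorted(paths, key=len))   # ['ac', 'abc', 'abbc', 'abbbc']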
REDUCTION PROCEDURE:
ALGORITHM:
• This section presents a reduction procedure for converting a flow graph whose links are labeled with names
into a path expression that denotes the set of all entry/exit paths in that flow graph. The procedure is a node-
by-node removal algorithm.
• The steps in Reduction Algorithm are as follows:
1. Combine all serial links by multiplying their path expressions.
2. Combine all parallel links by adding their path expressions.
3. Remove all self-loops (from any node to itself) by replacing them with a link of the form X*, where X is
the path expression of the link in that loop.
STEPS 4 - 8 ARE IN THE ALGORITHM'S LOOP:
4. Select any node for removal other than the initial or final node. Replace it with a set of equivalent links
whose path expressions correspond to all the ways you can form a product of the set of in links with
the set of out links of that node.
o In the first way, we remove the self-loop and then multiply all outgoing links by Z*.
o In the second way, we split the node into two equivalent nodes, call them A and A' and put in a link
between them whose path expression is Z*. Then we remove node A' using steps 4 and 5 to yield
outgoing links whose path expressions are Z*X and Z*Y.
• Removing the loop and then node 6 results in the following expression:
a(bgjf)*b(c+gkh)d((ilhd)*imf(bjgf)*b(c+gkh)d)*(ilhd)*e
• You can practice by applying the algorithm on the following flowgraphs and generate their respective path
expressions:
APPLICATIONS:
o The purpose of the node removal algorithm is to present one very generalized concept- the path expression and
way of getting it.
o Every application follows this common pattern:
1. Convert the program or graph into a path expression.
2. Identify a property of interest and derive an appropriate set of "arithmetic" rules that characterizes the
property.
3. Replace the link names by the link weights for the property of interest. The path expression has now
been converted to an expression in some algebra, such as ordinary algebra, regular expressions, or
boolean algebra. This algebraic expression summarizes the property of interest over the set of all paths.
4. Simplify or evaluate the resulting "algebraic" expression to answer the question you asked.
• HOW MANY PATHS IN A FLOW GRAPH ?
o The question is not simple. Here are some ways you could ask it:
1. What is the maximum number of different paths possible?
2. What is the fewest number of paths possible?
3. How many different paths are there really?
4. What is the average number of paths?
o Determining the actual number of different paths is an inherently difficult problem because there could
be unachievable paths resulting from correlated and dependent predicates.
o If we know both of these numbers (maximum and minimum number of possible paths) we have a good
idea of how complete our testing is.
o Asking for "the average number of paths" is meaningless.
• MAXIMUM PATH COUNT ARITHMETIC:
o Label each link with a link weight that corresponds to the number of paths that link represents.
o Also mark each loop with the maximum number of times that loop can be taken. If the answer is
infinite, you might as well stop the analysis because it is clear that the maximum number of paths will
be infinite.
o There are three cases of interest: parallel links, serial links, and loops.
o This arithmetic is an ordinary algebra. The weight is the number of paths in each set.
o EXAMPLE:
▪ The following is a reasonably well-structured program.
Each link represents a single link and consequently is given a weight of "1" to start. Let's say the outer loop will
be taken exactly four times and the inner loop can be taken zero to three times. Its path expression, with a little
work, is:
Path expression: a(b+c)d{e(fi)*fgj(m+l)k}*e(fi)*fgh
▪ A: The flow graph should be annotated by replacing the link name with the maximum of paths
through that link (1) and also note the number of times for looping.
▪ B: Combine the first pair of parallel loops outside the loop and also the pair in the outer loop.
▪ C: Multiply the things out and remove nodes to clear the clutter.
o Alternatively, you could have substituted a "1" for each link in the path expression and then simplified,
as follows:
a(b+c)d{e(fi)*fgj(m+l)k}*e(fi)*fgh
= 1(1 + 1)1(1(1 x 1)^{0-3} x 1 x 1 x 1(1 + 1)1)^4 x 1(1 x 1)^{0-3} x 1 x 1 x 1
= 2(1 x 4 x 1 x 1 x 2 x 1)^4 x 4
= 2(8)^4 x 4
= 2 x 8^4 x 4 = 32,768
o This is the same result we got graphically.
o Actually, the outer loop should be taken exactly four times. That doesn't mean it will be taken zero to
four times. Consequently, there is a superfluous "4" on the outlink in the last step. Therefore the
maximum number of different paths is 8192 rather than 32,768.
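The corrected count can be checked mechanically; a hedged sketch of the arithmetic (unit link weights and the stated loop limits):

    # Inner loop (fi) taken 0 to 3 times: 4 alternative subpaths.
    inner = sum(1**i for i in range(0, 4))    # = 4
    # One pass of the outer-loop body e(fi)*fgj(m+l)k: 1 x 4 x 1 x 1 x 1 x 2 x 1 = 8.
    body = 1 * inner * 1 * 1 * 1 * (1 + 1) * 1
    # Outer loop taken exactly four times, times the initial (b+c) choice;
    # the superfluous trailing factor of 4 is dropped, as explained above.
    print((1 + 1) * body**4)                  # 8192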
• Structured flow graph:
o Structured code can be defined in several different ways that do not involve ad-hoc rules such as not
using GOTOs.
o A structured flow graph is one that can be reduced to a single link by successive application of the
transformations of Figure.
o The values of the weights are the number of members in a set of paths.
o EXAMPLE:
▪ Applying the arithmetic to the earlier example gives us the identical steps until step 3 (C) as below:
▪ If you observe the original graph, you see that it takes at least two paths to cover it and that it can be done in two paths.
▪ If you have fewer paths in your test plan than this minimum, you probably haven't covered. It's another
check.
• CALCULATING THE PROBABILITY:
o Path selection should be biased toward the low - rather than the high-probability paths.
o This raises an interesting question:
What is the probability of being at a certain point in a routine?
This question can be answered under suitable assumptions, primarily that all probabilities involved are
independent, which is to say that all decisions are independent and uncorrelated.
o We use the same algorithm as before : node-by-node removal of uninteresting nodes.
o Weights, Notations and Arithmetic:
▪ Probabilities can come into the act only at decisions (including decisions associated with loops).
▪ Annotate each outlink with a weight equal to the probability of going in that direction.
▪ Evidently, the sum of the outlink probabilities must equal 1
▪ For a simple loop, if the loop will be taken a mean of N times, the looping probability is N/(N +
1) and the probability of not looping is 1/(N + 1).
▪ A link that is not part of a decision node has a probability of 1.
▪ The arithmetic rules are those of ordinary arithmetic.
▪ In this table, in case of a loop, PA is the probability of the link leaving the loop and PL is the
probability of looping.
▪ The rules are those of ordinary probability theory.
1. If you can do something either from column A with a probability of PA or from column B
with a probability PB, then the probability that you do either is PA + PB.
2. For the series case, if you must do both things, and their probabilities are independent
(as assumed), then the probability that you do both is the product of their probabilities.
▪ For example, a loop node has a looping probability of PL and a probability of not looping of PA,
which is obviously equal to 1 - PL.
▪ Following the above rule, all we've done is replace the outgoing probability with 1 - so why the
complicated rule? After a few steps in which you've removed nodes, combined parallel terms,
removed loops and the like, you might find something like this:
which is what we've postulated for any decision. In other words, division by 1 - PL renormalizes the outlink
probabilities so that their sum equals unity after the loop is removed.
o EXAMPLE:
▪ Here is a complicated bit of logic. We want to know the probability associated with cases A, B,
and C.
▪ Let us do this in three parts, starting with case A. Note that the sum of the probabilities at each
decision node is equal to 1. Start by throwing away anything that isn't on the way to case A,
and then apply the reduction procedure. To avoid clutter, we usually leave out probabilities
equal to 1.
CASE A:
▪ Case B is simpler:
▪ This checks. It's a good idea when doing this sort of thing to calculate all the probabilities and
to verify that the sum of the routine's exit probabilities does equal 1.
▪ If it doesn't, then you've made a calculation error or, more likely, you've left out some branching
probability.
▪ How about path probabilities? That's easy. Just trace the path of interest and multiply the
probabilities as you go.
▪ Alternatively, write down the path name and do the indicated arithmetic operation.
▪ Say that a path consisted of links a, b, c, d, e, and the associated probabilities were .2, .5, 1.,
.01, and 1. respectively. Path abcbcbcdeabddea would have a probability of 5 x 10^-10.
▪ Long paths are usually improbable.
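A hedged sketch of the path-probability rule using the link probabilities quoted above:

    # Probability of a path = product of the probabilities of its links.
    probs = {"a": 0.2, "b": 0.5, "c": 1.0, "d": 0.01, "e": 1.0}

    def path_probability(path):
        p = 1.0
        for link in path:
            p *= probs[link]
        return p

    print(path_probability("abcbcbcdeabddea"))   # about 5 x 10^-10, as in the text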
• Mean processing time of a routine:
o Given the execution time of all statements or instructions for every link in a flowgraph and the
probability for each direction for all decisions, the objective is to find the mean processing time for the
routine as a whole.
o The model has two weights associated with every link: the processing time for that link, denoted by T,
and the probability of that link P.
o The arithmetic rules for calculating the mean time:
o EXAMPLE:
0. Start with the original flow graph annotated with probabilities and processing time.
1. Combine the parallel links of the outer loop. The result is just the mean of the processing times
for the links because there aren't any other links leaving the first node. Also combine the pair
of links at the beginning of the flow graph.
3. Use the cross-term step to eliminate a node and to create the inner self - loop.
4. Finally, you can get the mean processing time, by using the arithmetic rules as follows:
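The worked figure is not reproduced here; as a hedged stand-in, the three arithmetic rules can be sketched as follows (the numbers are illustrative, not from the figure):

    # Parallel links: probability-weighted sum of the link times.
    def parallel(branches):                  # branches: list of (prob, time)
        return sum(p * t for p, t in branches)

    # Serial links: the times add.
    def serial(*times):
        return sum(times)

    # A loop taken a mean of N times contributes N times its body's time.
    def loop(body_time, mean_iterations):
        return body_time * mean_iterations

    # Example: a 0.6/0.4 decision, then a loop whose body takes 5 units
    # and is taken a mean of 3 times.
    print(serial(parallel([(0.6, 10), (0.4, 20)]), loop(5, 3)))   # 29.0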
• PUSH/POP, GET/RETURN:
o This model can be used to answer several different questions that can turn up in debugging.
o It can also help decide which test cases to design.
o The question is:
Given a pair of complementary operations such as PUSH (the stack) and POP (the stack), considering
the set of all possible paths through the routine, what is the net effect of the routine? PUSH or POP?
How many times? Under what conditions?
o Here are some other examples of complementary operations to which this model applies:
o GET/RETURN a resource block.
o OPEN/CLOSE a file.
o START/STOP a device or process.
o EXAMPLE 1 (PUSH / POP):
▪ Here is the Push/Pop Arithmetic:
▪ The numeral 1 is used to indicate that nothing of interest (neither PUSH nor POP) occurs on a
given link.
▪ "H" denotes PUSH and "P" denotes POP. The operations are commutative, associative, and
distributive.
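A hedged sketch of the arithmetic's intent: treating H as +1, P as -1, and 1 as a no-op, the net effect of a path is the sum along it (names hypothetical):

    # Net stack effect of a path whose links are weighted H (PUSH),
    # P (POP), or 1 (neither).
    def net_effect(path):
        return sum(+1 if w == "H" else -1 if w == "P" else 0 for w in path)

    print(net_effect("HH1P"))   # +1: one net PUSH over this path
    print(net_effect("HP"))     # 0: balanced, since HP = 1 in the algebra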
▪ Below Table 5.9 shows several combinations of values for the two looping terms - M1 is the
number of times the inner loop will be taken and M2 the number of times the outer loop will
be taken.
▪ G(G + R)G(GR)*GGR*R
= G(G + R)G^3 R*R
= (G + R)G^3 R*
= (G^4 + G^2)R*
▪ This expression specifies the conditions under which the resources will be balanced on leaving
the routine.
▪ If the upper branch is taken at the first decision, the second loop must be taken four times.
▪ If the lower branch is taken at the first decision, the second loop must be taken twice.
▪ For any other values, the routine will not balance. Therefore, the first loop does not have to be
instrumented to verify this behavior because its impact should be nil.
• Limitations and solutions:
o The main limitation to these applications is the problem of unachievable paths.
o The node-by-node reduction procedure, and most graph-theory-based algorithms work well when all
paths are possible, but may provide misleading results when some paths are unachievable.
o The approach to handling unachievable paths (for any application) is to partition the graph into sub
graphs so that all paths in each of the sub graphs are achievable.
o The resulting sub graphs may overlap, because one path may be common to several different sub
graphs.
o Each predicate's truth-functional value potentially splits the graph into two sub graphs. For n
predicates, there could be as many as 2^n sub graphs.
• THE PROBLEM:
o The generic flow-anomaly detection problem (note: not just data-flow anomalies, but any flow
anomaly) is that of looking for a specific sequence of operations considering all possible paths through a
routine.
o Let the operations be SET and RESET, denoted by s and r respectively, and we want to know if there is a
SET followed immediately by a SET or a RESET followed immediately by a RESET (an ss or an rr sequence).
o Some more application examples:
1. A file can be opened (o), closed (c), read (r), or written (w). If the file is read or written to after
it's been closed, the sequence is nonsensical. Therefore, cr and cw are anomalous. Similarly, if
the file is read before it's been written, just after opening, we may have a bug. Therefore, or is
also anomalous. Furthermore, oo and cc, though not actual bugs, are a waste of time and
therefore should also be examined.
2. A tape transport can do a rewind (d), fast-forward (f), read (r), write (w), stop (p), and skip (k).
There are rules concerning the use of the transport; for example, you cannot go from rewind to
fast-forward without an intervening stop or from rewind or fast-forward to read or write
without an intervening stop. The following sequences are anomalous: df, dr, dw, fd, and fr.
Does the flowgraph lead to anomalous sequences on any path? If so, what sequences and
under what circumstances?
3. The data-flow anomalies discussed earlier require us to detect the dd, dk, kk,
and ku sequences. Are there paths with anomalous data flows?
• THE METHOD:
o Annotate each link in the graph with the appropriate operator or the null operator 1.
o Simplify things to the extent possible, using the fact that a + a = a and 1^2 = 1.
o You now have a regular expression that denotes all the possible sequences of operators in that graph.
You can now examine that regular expression for the sequences of interest.
o EXAMPLE: Let A, B, C, be nonempty sets of character sequences whose smallest string is at least one
character long. Let T be a two-character string of characters. Then if T is a substring of (i.e., if T appears
within) AB^nC, then T will appear in AB^2C. (HUANG's Theorem)
o As an example, let
A = pp
B = srr
C = rp
T = ss
The theorem states that ss will appear in pp(srr)^n rp if it appears in pp(srr)^2 rp.
o However, let
A = p + pp + ps
B = psr + ps(r + ps)
C = rp
T = p^4
Is it obvious that there is a p^4 sequence in AB^nC? The theorem states that we have only to look at
(p + pp + ps)[psr + ps(r + ps)]^2 rp
Multiplying out the expression and simplifying shows that there is no p^4 sequence.
o Incidentally, the above observation is an informal proof of the wisdom of looping twice discussed in
Unit 2. Because data-flow anomalies are represented by two-character sequences, it follows from the above
theorem that looping twice is what you need to do to find such anomalies.
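The p^4 example above can also be checked by brute force; a hedged sketch (B is expanded to the set {psr, psps}, since psr + ps(r + ps) = psr + psr + psps and duplicates are absorbed):

    # Expand (p + pp + ps)[psr + ps(r + ps)]^2 rp and scan for "pppp".
    A = {"p", "pp", "ps"}
    B = {"psr", "psps"}
    C = {"rp"}

    strings = {a + b1 + b2 + c for a in A for b1 in B for b2 in B for c in C}
    print(any("pppp" in s for s in strings))   # False: no p^4 sequence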
• Limitations:
o Huang's theorem can be easily generalized to cover sequences of greater length than two characters.
Beyond three characters, though, things get complex and this method has probably reached its
utilitarian limit for manual application.
o There are some nice theorems for finding sequences that occur at the beginnings and ends of strings
but no nice algorithms for finding strings buried in an expression.
o Static flow analysis methods can't determine whether a path is or is not achievable. Unless the flow
analysis includes symbolic execution or similar techniques, the impact of unachievable paths will not be
included in the analysis.
o The flow-anomaly application, for example, doesn't tell us that there will be a flow anomaly - it tells us
that if the path is achievable, then there will be a flow anomaly. Such analytical problems go away, of
course, if you take the trouble to design routines for which all paths are achievable.
SOFTWARE TESTING
UNIT IV
Logic Based Testing: Overview, Decision Tables, Path Expressions, KV Charts, Specifications.
• INTRODUCTION:
o The functional requirements of many programs can be specified by decision tables, which provide a
useful basis for program and test design.
o Consistency and completeness can be analyzed by using boolean algebra, which can also be used as a
basis for test design. Boolean algebra is trivialized by using Karnaugh-Veitch charts.
o "Logic" is one of the most often used words in programmers' vocabularies but one of their least used
techniques.
o Boolean algebra is to logic as arithmetic is to mathematics. Without it, the tester or programmer is cut
off from many test and design techniques and tools that incorporate those techniques.
o Logic has been, for several decades, the primary tool of hardware logic designers.
o Many test methods developed for hardware logic can be adapted to software logic testing. Because
hardware testing automation is 10 to 15 years ahead of software testing automation, hardware testing
methods and its associated theory is a fertile ground for software testing methods.
o As programming and test techniques have improved, the bugs have shifted closer to the process front
end, to requirements and their specifications. These bugs range from 8% to 30% of the total and
because they're first-in and last-out, they're the costliest of all.
o The trouble with specifications is that they're hard to express.
o Boolean algebra (also known as the sentential calculus) is the most basic of all logic systems.
o Higher-order logic systems are needed and used for formal specifications.
o Much of logical analysis can be and is embedded in tools. But these tools incorporate methods to
simplify, transform, and check specifications, and the methods are to a large extent based on boolean
algebra.
o KNOWLEDGE BASED SYSTEM:
▪ The knowledge-based system (also called an expert system or "artificial intelligence" system) has
become the programming construct of choice for many applications that were once considered
very difficult.
▪ Knowledge-based systems incorporate knowledge from a knowledge domain such as medicine,
law, or civil engineering into a database. The data can then be queried and interacted with to
provide solutions to problems in that domain.
▪ One implementation of knowledge-based systems is to incorporate the expert's knowledge
into a set of rules. The user can then provide data and ask questions based on that data.
▪ The user's data is processed through the rule base to yield conclusions (tentative or definite)
and requests for more data. The processing is done by a program called the inference engine.
▪ Understanding knowledge-based systems and their validation problems requires an
understanding of formal logic.
o Decision tables are extensively used in business data processing; Decision-table preprocessors as
extensions to COBOL are in common use; boolean algebra is embedded in the implementation of these
processors.
o Although programmed tools are nice to have, most of the benefits of boolean algebra can be reaped by
wholly manual means if you have the right conceptual tool: the Karnaugh-Veitch diagram is that
conceptual tool.
DECISION TABLES:
• The below Figure is a limited-entry decision table. It consists of four areas called the condition stub, the
condition entry, the action stub, and the action entry.
• Each column of the table is a rule that specifies the conditions under which the actions named in the action
stub will take place.
• The condition stub is a list of names of conditions.
1. Action 1 will be taken if predicates 1 and 2 are true and if predicates 3 and 4 are false (rule 1), or if
predicates 1, 3, and 4 are true (rule 2).
2. Action 2 will be taken if the predicates are all false, (rule 3).
3. Action 3 will take place if predicate 1 is false and predicate 4 is true (rule 4).
• In addition to the stated rules, we also need a Default Rule that specifies the default action to be taken
when all other rules fail. The default rule for the table in Figure 6.1 is shown in the figure.
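A hedged sketch of this table as executable data (the predicate values are hypothetical; None marks an immaterial entry):

    # Each rule: (entries for predicates 1-4, action). None = immaterial.
    RULES = [
        ((True,  True,  False, False), "Action 1"),   # rule 1
        ((True,  None,  True,  True ), "Action 1"),   # rule 2
        ((False, False, False, False), "Action 2"),   # rule 3
        ((False, None,  None,  True ), "Action 3"),   # rule 4
    ]

    def decide(p1, p2, p3, p4):
        actual = (p1, p2, p3, p4)
        for entries, action in RULES:
            if all(e is None or e == a for e, a in zip(entries, actual)):
                return action
        return "Default action"    # the default rule

    print(decide(True, True, False, False))    # Action 1 (rule 1)
    print(decide(False, True, False, False))   # Default action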
o If the decision appears on a path, put in a YES or NO as appropriate. If the decision does not appear on
the path, put in an I (immaterial). Rule 1 does not contain decision C, therefore its entries are: YES, YES, I, YES.
o The corresponding decision table is shown in Table 6.1
o Similarly, if we expand the immaterial cases for the above Table, it results in the Table below:
o Sixteen cases are represented in Table 6.1, and no case appears twice.
o Consequently, the flow graph appears to be complete and consistent.
o As a first check, before you look for all sixteen combinations, count the number of Y's and N's in each
row. They should be equal. We can find the bug that way.
• Another example - a troublesome program:
o Consider the following specification whose putative flow graph :
1. If condition A is met, do process A1 no matter what other actions are taken or what
other conditions are met.
2. If condition B is met, do process A2 no matter what other actions are taken or what
other conditions are met.
3. If condition C is met, do process A3 no matter what other actions are taken or what
other conditions are met.
4. If none of the conditions is met, then do processes A1, A2, and A3.
5. When more than one process is done, process A1 must be done first, then A2, and then
A3. The only permissible cases are: (A1), (A2), (A3), (A1,A3), (A2,A3) and (A1,A2,A3).
o Figure shows a sample program with a bug.
PATH EXPRESSIONS:
• General:
o Logic-based testing is structural testing when it's applied to structure (e.g., control flowgraph of an
implementation); it's functional testing when it's applied to a specification.
o In logic-based testing we focus on the truth values of control flow predicates.
o A predicate is implemented as a process whose outcome is a truth-functional value.
o For our purpose, logic-based testing is restricted to binary predicates.
o We start by generating path expressions by path tracing as in Unit IV, but this time, our purpose is to
convert the path expressions into boolean algebra, using the predicates' truth values (e.g., A and A̅) as
weights.
• Boolean algebra:
o Steps:
1. Label each decision with an uppercase letter that represents the truth value of the
predicate. The YES or TRUE branch is labeled with a letter (say A) and the NO or FALSE
branch with the same letter overscored (say A̅).
2. The truth value of a path is the product of the individual labels. Concatenation or
products mean "AND". For example, the straight-through path of Figure 6.5, which goes
via nodes 3, 6, 7, 8, 10, 11, 12, and 2, has a truth value of ABC. The path via nodes 3, 6,
7, 9 and 2 has a value of AB̅.
3. If two or more paths merge at a node, the fact is expressed by use of a plus sign (+)
which means "OR".
o Using this convention, the truth-functional values for several of the nodes can be expressed in terms of
segments from previous nodes. Use the node name to identify the point.
o There are only two numbers in boolean algebra: zero (0) and one (1). One means "always true" and
zero means "always false".
o Rules of boolean algebra:
▪ Boolean algebra has three operators: X (AND), + (OR) and ¯ (NOT, written as an overbar)
▪ X : meaning AND. Also called multiplication. A statement such as AB (A X B) means "A
and B are both true". This symbol is usually left out as in ordinary algebra.
▪ + : meaning OR. "A + B" means "either A is true or B is true or both".
▪ A̅ : meaning NOT A. Also negation or complementation. This is read as either "not A" or "A
bar". The entire expression under the bar is negated.
▪ The following are the laws of boolean algebra:
o In all of the above, a letter can represent a single sentence or an entire boolean algebra expression.
o Individual letters in a boolean algebra expression are called Literals (e.g. A,B)
o The product of several literals is called a product term (e.g., ABC, DE).
o An arbitrary boolean expression that has been multiplied out so that it consists of the sum of products
(e.g., ABC + DEF + GH) is said to be in sum-of-products form.
o The result of simplifications (using the rules above) is again in sum-of-products form and each
product term in such a simplified version is called a prime implicant. For example, ABC + AB + DEF
reduces by rule 20 (absorption: X + XY = X) to AB + DEF; that is, AB and DEF are prime implicants.
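For instance, the reduction just cited needs only the distributive law and the identity C + 1 = 1:

    ABC + AB = AB(C + 1) = AB(1) = AB

so the term ABC is absorbed and AB + DEF remains.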
o The path expressions of Figure 6.5 can now be simplified by applying the rules.
o Similarly,
o The deviation from the specification is now clear. The functions should have been:
• Loops complicate things because we may have to solve a boolean equation to determine what
predicate-value combinations lead to where.
KV CHARTS:
• Introduction:
o If you had to deal with expressions in four, five, or six variables, you could get bogged down in the
algebra and make as many errors in designing test cases as there are bugs in the routine you're testing.
o The Karnaugh-Veitch chart reduces boolean algebraic manipulations to graphical trivia.
o Beyond six variables these diagrams get cumbersome and may not be effective.
• Single Variable:
o Figure shows all the boolean functions of a single variable and their equivalent representation as a KV
chart.
o The charts show all possible truth values that the variable A can have.
o A "1" means the variable’s value is "1" or TRUE. A "0" means that the variable's value is 0 or FALSE.
o The entry in the box (0 or 1) specifies whether the function that the chart represents is true or false for
that value of the variable.
o We usually do not explicitly put in 0 entries but specify only the conditions under which the function is
true.
• Two variables:
o Figure shows eight of the sixteen possible functions of two variables.
o Each box corresponds to the combination of values of the variables for the row and column of that box.
o A pair may be adjacent either horizontally or vertically but not diagonally.
o Any variable that changes in either the horizontal or vertical direction does not appear in the
expression.
o In the fifth chart, the B variable changes from 0 to 1 going down the column, and because the A
variable's value for the column is 1, the chart is equivalent to a simple A.
o The first chart has two 1's in it, but because they are not adjacent, each must be taken separately.
o They are written using a plus sign.
o It is clear now why there are sixteen functions of two variables.
o Each box in the KV chart corresponds to a combination of the variables' values.
o That combination might or might not be in the function (i.e., the box corresponding to that
combination might have a 1 or 0 entry).
o Since n variables lead to 2^n combinations of 0 and 1 for the variables, and each such combination (box)
can be filled or not filled, there are 2^(2^n) ways of doing this.
o Consequently for one variable there are 2^(2^1) = 4 functions, 2^(2^2) = 16 functions of 2 variables,
2^(2^3) = 256 functions of 3 variables, 2^(2^4) = 65,536 functions of 4 variables, and so on.
o Given two charts over the same variables, arranged the same way, their product is the term by term
product, their sum is the term by term sum, and the negation of a chart is gotten by reversing all the 0
and 1 entries in the chart.
• Three variables:
o KV charts for three variables are shown below.
o As before, each box represents an elementary term of three variables with a bar appearing or not
appearing according to whether the row-column heading for that box is 0 or 1.
o A three-variable chart can have groupings of 1, 2, 4, and 8 boxes.
o A few examples will illustrate the principles:
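The worked examples appear as charts in the original figures, but the layout is easy to reproduce. The Python sketch below prints a KV-style chart for any three-variable function; the Gray-code column order (00, 01, 11, 10) is what makes horizontally adjacent boxes differ in exactly one variable. The sample function is an arbitrary choice.

    # Print a 3-variable KV chart: rows are values of A, columns are values
    # of B and C in Gray-code order so adjacent boxes differ in one variable.
    def kv_chart(f):
        cols = [(0, 0), (0, 1), (1, 1), (1, 0)]
        print("A\\BC   " + "   ".join(f"{b}{c}" for b, c in cols))
        for a in (0, 1):
            # Show a 1 where the function is true; leave 0 boxes as "."
            row = "    ".join("1" if f(a, b, c) else "." for b, c in cols)
            print(f"   {a}    {row}")

    kv_chart(lambda a, b, c: (not a and b) or c)   # sample function: (NOT A)B + C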
SPECIFICATIONS:
A path expression is defined as an expression that represents the set of all possible paths between an entry
node and an exit node. Path expressions play an important role during debugging. A path expression surveys the
structure of a flow graph, including the number of paths, the time required to process a path, and any
inconsistencies in data flow.
Self-loops can be removed in either of two ways:
• In the first way, we remove the self-loop and then multiply all outgoing links by Z*.
• In the second way, we split the node into two equivalent nodes, call them A and A' and put in a link between them whose
path expression is Z*. Then we remove node A' using steps 4 and 5 to yield outgoing links whose path expressions are Z*X
and Z*Y.
5) List the three arithmetic rules used to calculate mean processing time?
Parallel rule: the mean processing time is the arithmetic mean of the execution times over all parallel links in a flow graph.
Series rule: the mean processing time is the sum of the two execution times.
Loop rule: a combination of the parallel and series rules.
UNIT-IV
1) Discuss how the decision tables can be Basis for test case design?
2) Explain about KV Charts for one, two & three Variables and its
Specifications? (Nov-2017 , June-2018)
3) Explain Regular expressions and flow anomaly detection? (Nov-2018 , June-
2018)
4) What is the looping probability of a path expression? Explain with an
example. (Nov-2018 , June-2018)
QB4
SOFTWARE TESTING
UNIT V
State, State Graphs and Transition Testing: State Graphs, Good & Bad State Graphs, State Testing, Testability Tips.
STATE GRAPHS:
The word “state” denotes a combination of circumstances or attributes belonging for the time being to a person or thing.
➢ A program that detects the character sequence “ZCZC” can be in the following states:
1. Neither ZCZC nor any part of it has been detected.
2. Z has been detected.
3. ZC has been detected.
4. ZCZ has been detected.
5. ZCZC has been detected.
➢ A moving automobile whose engine is running can have the following states with respect to its transmission:
1. Reverse gear
2. Neutral gear
3. First gear
4. Second gear
5. Third gear
6. Fourth gear
➢ Whatever is being modeled is subjected to inputs. As a result of those inputs, the state changes, or is said to have
made a transition.
➢ Transitions are denoted by links that join the states.
➢ The ZCZC detection example can have the following kinds of inputs:
1. Z
2. C
3. Any character other than Z or C, which we’ll denote by A
➢ The state graph is interpreted as follows:
1. If the system is in the “NONE” state, any input other than a Z will keep it in that state.
2. If a Z is received, the system transitions to the “Z” state.
3. If the system is in the “Z” state and a Z is received, it will remain in the “Z” state. If a C is received,
it will go to the “ZC” state; if any other character is received, it will go back to the “NONE” state
because the sequence has been broken.
4. A Z received in the “ZC” state progresses to the “ZCZ” state, but any other character breaks the
sequence and causes a return to the “NONE” state.
5. A C received in the “ZCZ” state completes the sequence and the system enters the “ZCZC” state. A
Z breaks the sequence and causes a transition back to the “Z” state; any other character causes a
return to the “NONE” state.
Outputs
➢ An output can be associated with any link.
➢ Outputs are denoted by letters or words and are separated from inputs by a slash as follows: “input/output.”
➢ Output denotes anything of interest that’s observable and is not restricted to explicit outputs by devices.
➢ Outputs are also link weights. If every input associated with a transition causes the same output, then denote it as:
“input 1, input 2 . . . input n/output.”
➢ The state table or state-transition table specifies the states, the inputs, the transitions, and the outputs.
1. Each row of the table corresponds to a state.
2. Each column corresponds to an input condition.
3. The box at the intersection of a row and column specifies the next state (the transition) and the
output, if any.
➢ A table-driven implementation of a state graph typically uses four tables:
1. A table or process that encodes the input values into a compact list (INPUT_CODE_TABLE).
2. A table that specifies the next state for every combination of state and input code (TRANSITION_TABLE).
3. A table or case statement that specifies the output or output code, if any, associated with every state-input
combination (OUTPUT_TABLE).
4. A table that stores the present state of every device or process that uses the same state table—e.g., one entry
per tape transport (DEVICE_TABLE).
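A minimal Python sketch of this four-table scheme, applied to the ZCZC detector described earlier. The transitions follow the state-graph interpretation given above; what happens after the full sequence has been seen is not specified there, so the ZCZC row is an assumption, and the per-device table is omitted because only one detector instance runs here.

    # Table-driven finite-state machine for the ZCZC detector.
    def INPUT_CODE_TABLE(ch):                  # encode raw input into Z, C or A
        return ch if ch in ("Z", "C") else "A"

    TRANSITION_TABLE = {
        "NONE": {"Z": "Z",    "C": "NONE", "A": "NONE"},
        "Z":    {"Z": "Z",    "C": "ZC",   "A": "NONE"},
        "ZC":   {"Z": "ZCZ",  "C": "NONE", "A": "NONE"},
        "ZCZ":  {"Z": "Z",    "C": "ZCZC", "A": "NONE"},
        "ZCZC": {"Z": "ZCZC", "C": "ZCZC", "A": "ZCZC"},  # assumed: stay put
    }

    OUTPUT_TABLE = {("ZCZ", "C"): "sequence detected"}    # output on completion

    state = "NONE"
    for ch in "XZCZCQ":
        code = INPUT_CODE_TABLE(ch)
        output = OUTPUT_TABLE.get((state, code))
        state = TRANSITION_TABLE[state][code]
        if output:
            print(output, "-> state", state)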
➢ Similarly, “state-symbol product” means the hypothetical (or actual) concatenation used to combine the state and
input codes.
➢ What constitutes a good or a bad state graph is to some extent biased by the kinds of state graphs that are likely to
be used in a software test design context. Here are some principles for judging:
1. The total number of states is equal to the product of the possibilities of factors that make up the state.
2. For every state and input there is exactly one transition specified to exactly one, possibly the same,
state.
3. For every transition there is one output action specified. That output could be trivial, but at least one
output does something sensible.
4. For every state there is a sequence of inputs that will drive the system back to the same state.
Impossible States
Some combinations of factors may appear to be impossible.
Because the states we deal with inside computers are not the states of the real world but rather a numerical
representation of real-world states, the “impossible” states can occur.
Equivalent States
Two states are equivalent if every sequence of inputs starting from one state produces exactly the same sequence of
outputs when started from the other state. This notion can also be extended to sets of states.
Consider the following specification for a tape-transport error-recovery routine:
Rule 1: The program will maintain an error counter, which will be incremented whenever there’s an error.
Rule 2: If there is an error, rewrite the block.
Rule 3: If there have been three successive errors, erase 10 centimeters of tape and then rewrite the block.
Rule 4: If there have been three successive erasures and another error occurs, put the unit out of service.
Rule 5: If the erasure was successful, return to the normal state and clear the error counter.
Rule 6: If the rewrite was unsuccessful, increment the error counter, advance the state, and try another rewrite.
Rule 7: If the rewrite was successful, decrement the error counter and return to the previous state.
Rule 3: If there have been three successive errors, erase 10 centimeters of tape and then rewrite the block.
Rule 3, if followed blindly, causes an unnecessary rewrite. It’s a minor bug, so let it go for now, but it pays to check such
things. There might be an arcane security reason for rewriting, erasing, and then rewriting again.
Rule 4: If there have been three successive erasures and another error occurs, put the unit out of service.
Rule 4 terminates our interest in this state graph so we can dispose of states beyond 6. The details of state 6 will not be
covered by this specification; presumably there is a way to get back to state 0. Also, we can credit the specifier with
enough intelligence not to have expected a useless rewrite and erase prior to going out of service.
Rule 5: If the erasure was successful, return to the normal state and clear the counter.
Rule 6: If the rewrite was unsuccessful, increment the error counter, advance the state, and try another rewrite.
Because the value of the error counter is the state, and because rules 1 and 2 specified the same action, there seems to
be no point to rule 6 unless yet another rewrite was wanted. Furthermore, the order of the actions is wrong. If the
state is advanced before the rewrite, we could end up in the wrong state. The proper order should have been: output =
attempt-rewrite and then increment the error counter.
Rule 7: If the rewrite was successful, decrement the error counter and return to the previous state.
Rule 7 got rid of the ambiguities but created contradictions. The specifier’s intention was probably:
Rule 7A: If there have been no erasures and the rewrite is successful, return to the previous state.
Unreachable States
➢ An unreachable state is like unreachable code—a state that no input sequence can reach. An unreachable state is
not impossible, just as unreachable code is not impossible.
➢ There are two possibilities: (1) there is a bug, that is, some transitions are missing; (2) the transitions are there, but
you don’t know about them.
Dead States
A dead state, (or set of dead states) is a state that once entered cannot be left.
Output Errors
The states, the transitions, and the inputs could be correct, there could be no dead or unreachable states, but the
output for the transition could be incorrect. Output actions must be verified independently of states and transitions.
That is, you should distinguish between a program whose state graph is correct but has the wrong output for a
transition and one whose state graph is incorrect.
Encoding Bugs
➢ Encoding bugs for input coding, output coding, state codes, and state-symbol product formation could exist as such
only in an explicit finite-state machine implementation.
➢ The behavior of a finite-state machine is invariant under all encodings.
STATE TESTING
Impact of Bugs
➢ A bug can manifest itself as one or more of the following symptoms:
1. Wrong number of states.
2. Wrong transition for a given state-input combination.
3. Wrong output for a given transition.
4. Pairs of states or sets of states that are inadvertently made equivalent (factor lost).
5. States or sets of states that are split to create inequivalent duplicates.
6. States or sets of states that have become dead.
7. States or sets of states that have become unreachable.
➢ A path in a state graph is a succession of transitions caused by a sequence of inputs.
➢ The starting point of state testing is:
1. Define a set of covering input sequences that get back to the initial state when starting from the
initial state.
2. For each step in each input sequence, define the expected next state, the expected transition, and the
expected output code.
➢ A set of tests, then, consists of three sets of sequences:
1. Input sequences.
2. Corresponding transitions or next-state names.
3. Output sequences.
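A Python sketch of such a test set, reusing the ZCZC transition table from earlier: each input sequence drives the machine back to the initial state, and the expected next state is checked at every step. The particular sequences are illustrative.

    TRANSITIONS = {
        "NONE": {"Z": "Z",   "C": "NONE", "A": "NONE"},
        "Z":    {"Z": "Z",   "C": "ZC",   "A": "NONE"},
        "ZC":   {"Z": "ZCZ", "C": "NONE", "A": "NONE"},
        "ZCZ":  {"Z": "Z",   "C": "ZCZC", "A": "NONE"},
    }

    def run_test(inputs, expected_states, start="NONE"):
        state = start
        for inp, expected in zip(inputs, expected_states):
            state = TRANSITIONS[state][inp]
            if state != expected:
                return f"FAIL at {inp}: got {state}, expected {expected}"
        return "PASS"

    # Covering sequences that break the pattern and return to the start state.
    print(run_test(["Z", "C", "C"], ["Z", "ZC", "NONE"]))
    print(run_test(["Z", "Z", "A"], ["Z", "Z", "NONE"]))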
TESTABILITY TIPS
Switches, Flags, and Unachievable Paths
➢ Switches and flags hold state, so testing them is, in effect, state testing.
➢ Switches/flags are an essential tool for state testing: they are used to drive the finite-state machine through every
possible state.
➢ A switch/flag is set during the initial processing of the finite-state machine; its value is then evaluated and tested.
The advantage of this implementation is that if any of the combinations are not needed, we merely clip out that
part of the decision tree.
Design Guidelines
1. Learn how it’s done in hardware. I know of no books on finite-state machine design for programmers.
2. Start by designing the abstract machine. Verify that it is what you want to do.
3. Start with an explicit design—that is, input encoding, output encoding, state code assignment, transition table,
output table, state storage, and how you intend to form the state-symbol product.
4. Before you start taking shortcuts, see if it really matters.
5. Take shortcuts by making things implicit only as you must to make significant reductions in time or space and
only if you can show that such savings matter in the context of the whole system.
6. Consider a hierarchical design if you have more than a few dozen states.
7. Build, buy, or implement tools and languages that implement finite-state machines as software if you’re doing
more than a dozen states routinely.
8. Build in the means to initialize to any arbitrary state. Build in the transition verification instrumentation (the
coverage analyzer). These are much easier to do with an explicit machine.
SOFTWARE TESTING
UNIT V
Graph Matrices and Application: Motivational Overview, Matrix of Graph, Relations, Power of a Matrix, Node
Reduction Algorithm, Building Tools. (Student should be given an exposure to a tool like JMeter or Win-runner).
MOTIVATIONAL OVERVIEW:
Graph matrices are introduced as another representation for graphs, and some useful tools resulting from them are
examined: matrix operations, relations, the node-reduction algorithm revisited, and equivalence-class partitions.
➢ Path tracing is not easy, and it’s subject to error. You can miss a link here and there or cover some links twice.
➢ One solution to this problem is to represent the graph as a matrix and to use matrix operations equivalent to path
tracing.
MATRIX OF GRAPH
A graph matrix is a square array with one row and one column for every node in the graph. Each row-column
combination corresponds to a relation between the node corresponding to the row and the node corresponding to
the column.
Observe the following:
1. The size of the matrix (i.e., the number of rows and columns) equals the number of nodes.
2. There is a place to put every possible direct connection or link between any node and any other node.
3. The entry at a row and column intersection is the link weight of the link (if any) that connects the two nodes in that
direction.
4. A connection from node i to node j does not imply a connection from node j to node i.
5. If there are several links between two nodes, then the entry is a sum; the “+” sign denotes parallel links as usual.
In general, an entry is not just a simple link name but a path expression corresponding to the paths between the pair
of nodes.
A matrix with weights defined like this is called a graph matrix. The connection matrix is obtained by replacing
each entry with 1 if there is a link and 0 if there isn’t.
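A small Python sketch that builds a graph matrix from a list of links and derives its connection matrix; the node numbers and link names are made up for the example.

    nodes = [1, 2, 3]
    links = [(1, 2, "a"), (2, 3, "b"), (3, 2, "c"), (2, 2, "d")]  # d: self-loop

    graph = [["" for _ in nodes] for _ in nodes]
    for i, j, w in links:
        # Parallel links between the same pair are joined with "+" as usual.
        graph[i - 1][j - 1] = graph[i - 1][j - 1] + "+" + w if graph[i - 1][j - 1] else w

    connection = [[1 if entry else 0 for entry in row] for row in graph]
    for row in graph:
        print(row)
    for row in connection:
        print(row)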
Further Notation
➢ To compact things, the entry corresponding to row i and column j, which is to say the link weight between
nodes i and j, is denoted by aij.
➢ A self-loop about node i is denoted by aii, while the link weight for the link between nodes j and i is denoted by
aji. In this notation, path segments are expressed as products of link names; several example paths follow.
➢ The expression “aijajjajm” denotes a path from node i to j, with a self-loop at j and then a link from node j to node
m. The expression “aijajkakmami” denotes a path from node i back to node i via nodes j, k, and m. An expression
such as “aikakmamj+ ainanpapj” denotes a pair of paths between nodes i and j, one going via nodes k and m and the
other via nodes n and p.
➢ This notation may seem cumbersome, but it’s not intended for working with the matrix of a graph but for
expressing operations on the matrix. It’s a very compact notation. For example,
Σk aik akj
denotes the set of all possible paths between nodes i and j via one intermediate node. But because “i” and “j”
denote any node, this expression is the set of all possible paths between any two nodes via one intermediate node.
➢ The transpose of a matrix is the matrix with rows and columns interchanged. It is denoted by a superscript letter
“T,” as in AT. If C = AT then cij = aji.
➢ The intersection of two matrices of the same size, denoted by A#B is a matrix obtained by an element-by-
element multiplication operation on the entries. For example, C = A#B means cij = aij#bij. The multiplication
operation is usually boolean AND or set intersection.
➢ Similarly, the union of two matrices is defined as the element-by-element addition operation such as a boolean
OR or set union.
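These element-by-element operations are easy to state in code. A minimal Python sketch over 0/1 connection matrices:

    def transpose(A):                 # C = A-transpose means c[i][j] = a[j][i]
        return [list(col) for col in zip(*A)]

    def intersection(A, B):           # C = A # B: element-by-element boolean AND
        return [[x & y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

    def union(A, B):                  # element-by-element boolean OR
        return [[x | y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

    A = [[0, 1], [1, 0]]
    B = [[1, 1], [0, 0]]
    print(transpose(A), intersection(A, B), union(A, B))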
RELATIONS
➢ A link weight can be numerical, logical, illogical, objective, subjective, or whatever. Furthermore, there is no limit
to the number and type of link weights that one may associate with a relation.
➢ “Is connected to” is just about the simplest relation there is: it is denoted by an unweighted link. The matrix of a
graph defined over “is connected to” is called, as we said before, a connection matrix. For more general relations,
the matrix is called a relation matrix.
Properties of Relations
A relation R is reflexive if aRa holds for every a, symmetric if aRb implies bRa, and transitive if aRb and bRc
together imply aRc.
Equivalence Relations
An equivalence relation is a relation that satisfies the reflexive, transitive, and symmetric properties.
POWER OF A MATRIX
Let A be a matrix whose entries are aij. The set of all paths between any node i and any other node j (possibly i itself),
via all possible intermediate nodes, is given by the infinite sum
A + A^2 + A^3 + A^4 + . . .
where the entries of the nth power collect the paths of exactly n links.
Given two matrices A and B, with entries aik and bkj, respectively, their product is a new matrix C, whose entries are cij,
where:
cij = Σk aik bkj
with the addition and multiplication interpreted appropriately for the link weights.
The indexes of the product [e.g., (3,2) in C32] identify, respectively, the row of the first matrix and the column of the
second matrix that will be combined to yield the entry for that product in the product matrix.
The C32 entry is obtained by combining, element by element, the entries in the third row of the A matrix with the
corresponding elements in the second column of the B matrix. I use two hands. My left hand points and traces across
the row while the right points down the column of B. It’s like patting your head with one hand and rubbing your
stomach with the other at the same time: it takes practice to get the hang of it. Applying this to the matrix of the
example graph yields its square, A^2.
More generally, higher powers can be formed in any order:
A^3 = A^2 A = A A^2
A^4 = A^2 A^2 = (A^2)^2 = A^3 A = A A^3
This is an eloquent, but practically useless, expression. Let I be an n by n matrix, where n is the number of nodes,
whose entries are multiplicative identity elements along the principal diagonal and null elements everywhere else. For
link names, the identity can be the number “1.” For other kinds of weights, it is the multiplicative identity for those
weights. The above sum can be re-phrased as:
A(I + A + A^2 + A^3 + A^4 + . . . + A^∞)
But often for relations, A + A = A, so that (A + I)^2 = A^2 + A + A + I = A^2 + A + I. Furthermore, for any finite n,
(A + I)^n = I + A + A^2 + A^3 + . . . + A^n
Therefore, the original infinite sum can be replaced by A(A + I)^∞.
This is an improvement, because in the original expression we had both infinite products and infinite sums, and now we
have only one infinite product to contend with. The above is valid whether or not there are loops. If we restrict our
interest for the moment to paths of length n – 1 or less, where n is the number of nodes, the set of all such paths is
given by A(A + I)^(n–2).
This is an interesting set of paths because, with n nodes, no path can exceed n – 1 nodes without incorporating some
path segment that is already incorporated in some other path or path segment. Finding the set of all such paths is
somewhat easier because it is not necessary to do all the intermediate products explicitly. The following algorithm is
effective:
1. Express n – 2 as a binary number.
2. Take successive squares of (A + I), leading to (A + I)^2, (A + I)^4, (A + I)^8, and so on.
3. Keep only those binary powers of (A + I) that correspond to a 1 value in the binary representation of n – 2.
4. The set of all paths of length n – 1 or less is obtained as the product of the matrices you got in step 3 with the
original matrix.
As an example, let the graph have 16 nodes. We want the set of all paths of length less than or equal to 15. The binary
representation of n – 2 (14) is 2^3 + 2^2 + 2^1. Consequently, the set of paths is given by
A(A + I)^8 (A + I)^4 (A + I)^2
A matrix for which A^2 = A is said to be idempotent. A matrix whose successive powers eventually yield an idempotent
matrix is called an idempotent generator—that is, a matrix for which there is a k such that A^(k+1) = A^k.
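A simplified Python sketch of the shortcut: rather than decomposing n – 2 into binary powers, it squares (A + I) until the matrix stops changing, which the idempotence property just described guarantees will happen after at most about log2(n) squarings. The resulting matrix marks every node pair joined by a path.

    def bool_product(A, B):
        n = len(A)
        return [[int(any(A[i][k] and B[k][j] for k in range(n)))
                 for j in range(n)] for i in range(n)]

    def closure(A):
        n = len(A)
        M = [[A[i][j] | (i == j) for j in range(n)] for i in range(n)]  # A + I
        while True:
            M2 = bool_product(M, M)    # successive squares: (A+I)^2, (A+I)^4, ...
            if M2 == M:                # idempotent: no new paths appear
                return M
            M = M2

    A = [[0, 1, 0], [0, 0, 1], [0, 0, 0]]
    print(closure(A))   # entry (i, j) is 1 iff some path joins i to j (or i = j)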
Partitioning Algorithm
The graph may have loops. We would like to partition the graph by grouping nodes in such a way that every loop is
contained within one group or another. Such a graph is partly ordered. There are many uses for an algorithm that does
that:
1. We might want to embed the loops within a subroutine so as to have a resulting graph which is loop-free at the top
level.
2. Many graphs with loops are easy to analyze if you know where to break the loops.
3. While you and I can recognize loops, it’s much harder to program a tool to do it unless you have a solid algorithm on
which to base the tool.
You can recognize equivalent nodes by simply picking a row (or column) and searching the matrix for identical rows.
Mark the nodes that match the pattern as you go and eliminate that row. Then start again from the top with another
row and another pattern. Eventually, all rows have been grouped. The algorithm leads to the following equivalent node
sets:
A = [1]   B = [2,7]   C = [3,4,5]   D = [6]   E = [8]
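The row-matching search is mechanical enough to sketch in a few lines of Python; the small matrix at the end is illustrative rather than the eight-node example just quoted.

    def group_identical_rows(matrix, names):
        groups, assigned = [], set()
        for i, row in enumerate(matrix):
            if i in assigned:
                continue
            group = [names[i]]
            for j in range(i + 1, len(matrix)):
                if j not in assigned and matrix[j] == row:  # same pattern
                    group.append(names[j])
                    assigned.add(j)
            assigned.add(i)
            groups.append(group)
        return groups

    M = [[0, 1, 0], [1, 0, 1], [1, 0, 1]]     # rows 2 and 3 are identical
    print(group_identical_rows(M, ["n1", "n2", "n3"]))   # [['n1'], ['n2', 'n3']]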
NODE-REDUCTION ALGORITHM
The advantage of the matrix-reduction method is that it is more methodical than the graphical method and does not entail
continually redrawing the graph. It’s done as follows:
1. Select a node for removal; replace the node by equivalent links that bypass that node and add those links to the
links they parallel.
2. Combine the parallel terms and simplify as you can.
3. Observe loop terms and adjust the out links of every node that had a self-loop to account for the effect of the loop.
4. The result is a matrix whose size has been reduced by 1. Continue until only the two nodes of interest exist.
The Algorithm
The first step is the most complicated one: eliminating a node and replacing it with a set of equivalent links.
The reduction is done one node at a time by combining the elements in the last column with the elements in the last
row and putting the result into the entry at the corresponding intersection.
In the above case, the f in column 5 is first combined with h*g in column 2, and the result (fh*g) is added to the c term
just above it. Similarly, the f is combined with h*e in column 3 and put into the 4,3 entry just above it.
If any loop terms had occurred at this point, they would have been taken care of by eliminating the loop term and
premultiplying every term in that row by the loop term starred. There are no loop terms at this point. The next node to be
removed is node 4. The b term in the (3,4) position will combine with the (4,2) and (4,3) terms to yield a (3,2) and a
(3,3) term, respectively. Carrying this out and discarding the unnecessary rows and columns yields:
There is only one node to remove now, node 3. This will result in a term in the (1,2) entry whose value is
This is the path expression from node 1 to node 2. Stare at this one for a while before you object to the (bfh*e)* term
that multiplies the d; any fool can see the direct path via d from node 1 to the exit, but you could miss the fact that the
routine could circulate around nodes 3, 4, and 5 before it finally took the d link to node 2.
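A compact Python sketch of the reduction over a matrix of path-expression strings; the three-node example (links a, b, d and self-loop z) is made up, but it exercises both the self-loop starring and the parallel-term addition described above.

    def reduce_last_node(M):
        last = len(M) - 1
        loop = f"({M[last][last]})*" if M[last][last] else ""  # star any self-loop
        R = [row[:last] for row in M[:last]]                   # drop last row/column
        for i in range(last):
            for j in range(last):
                if M[i][last] and M[last][j]:                  # bypass i -> last -> j
                    term = M[i][last] + loop + M[last][j]
                    R[i][j] = f"{R[i][j]}+{term}" if R[i][j] else term
        return R

    M = [["", "d", "a"],    # node 1: d to node 2, a to node 3
         ["",  "", ""],     # node 2: exit
         ["", "b", "z"]]    # node 3: b to node 2, self-loop z
    while len(M) > 2:
        M = reduce_last_node(M)
    print(M[0][1])          # d+a(z)*b : the path expression from node 1 to node 2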
BUILDING TOOLS
We can represent the matrix as a two-dimensional array for small graphs with simple weights, but this is not
convenient for larger graphs because:
1. Space—Space grows as n^2 for the matrix representation, but for a linked list only as kn, where k is a small number
such as 3 or 4.
2. Weights—Most weights are complicated and can have several components. That would require an additional weight
matrix for each such weight.
3. Variable-Length Weights—If the weights are regular expressions, say, or algebraic expressions (which is what we
need for a timing analyzer), then we need a two-dimensional string array, most of whose entries would be null.
4. Processing Time—Even though operations over null entries are fast, it still takes time to access such entries and
discard them. The matrix representation forces us to spend a lot of time processing combinations of entries that we
know will yield null results.
Linked-List Representation
Give every node a unique name or number. A link is a pair of node names.
The link names will usually be pointers to entries in a string array where the actual link weight expressions are stored. If
the weights are fixed length then they can be associated directly with the links in a parallel, fixed entry-length array.
Let’s clarify the notation a bit by using node names and pointers.
The node names appear only once, at the first link entry. Also, instead of naming the other end of the link, we have just
the pointer to the list position in which that node starts. Finally, it is also very useful to have back pointers for the in
links. Doing this we get
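A minimal Python sketch of this representation, including the back pointers; the node numbers, link names, and the pointer encoding (indexes into a weight array) are illustrative.

    weights = ["a", "b", "c"]             # string array of link-weight expressions
    outlinks = {                          # node -> list of (target, weight index)
        1: [(2, 0)],                      # link a: 1 -> 2
        2: [(3, 1), (2, 2)],              # link b: 2 -> 3; link c: 2 -> 2 (self-loop)
        3: [],
    }

    inlinks = {n: [] for n in outlinks}   # back pointers, derived from the outlinks
    for src, link_list in outlinks.items():
        for dst, w in link_list:
            inlinks[dst].append((src, w))

    print(inlinks[2])   # [(1, 0), (2, 2)]: node 2 entered from node 1 (a) and itself (c)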
7) Define FSM?
A finite-state machine (FSM) or finite-state automaton (FSA, plural: automata), finite automaton,
or simply a state machine, is a mathematical model of computation. An FSM is defined by a list of its
states, its initial state, and the conditions for each transition.
UNIT-V
1) Define a state?
2) Define State table? (Nov-2018 , June-2017)
3) Define Unreachable State? (June-2016)
4) When the two states are said to be equivalent states?
5) Advantages of state testing?
6) What is cyclomatic complexity? (Nov-2016)
7) Define FSM? (Nov-2018 , June-2018)
1) Explain about State Graph of its Good and Bad with example?(Nov-2018 ,
June-2017)
2) Explain about Node Reduction Algorithm? (Nov-2017 , June-2018)
3) What are Graph matrices and explain their applications. (Nov-2018 )
QB5