Unit 4
Project Management
Course Outcomes
CO1: Identify the suitable process model for software
project development.
CO2: Describe the requirements engineering process and
specification.
CO3: Apply systematic procedures for complete software
design.
CO4: Demonstrate the various testing levels and strategies
applied in validating the software.
CO5: Summarize the project management activities for
successful management of the software.
Syllabus
UNIT I- Software Process
Introduction to Software Engineering
Software Life Cycle Model
UNIT II- Requirement Analysis and Specification
Functional and Non-Functional Requirements
Requirement Analysis and Design, Validation
UNIT III- Design
Design Process- Good Design- Functional Independence- Cohesion and
Coupling
Structured Analysis- Data Flow Diagram
Structured Design – Structured Chart
UNIT IV –Testing
Testing Process, Strategies, Types
UNIT V – Project Management
Project Planning, Estimation and Scheduling
Requirement Gathering, Analysis and Specification
Formal and Informal specification of Requirement
Organizational and Team Structure
Text Books and References
Text Books:
Roger Pressman, Bruce Maxim, “Software Engineering – A Practitioner's Approach”,
Ninth Edition, McGraw Hill International Edition, 2019.
Ian Sommerville, “Software Engineering”, Tenth Edition, Pearson Education Asia, 2016.
Reference Books:
Rajib Mall, “Fundamentals of Software Engineering”, Fifth Edition, PHI Learning Private
Limited, 2018.
Pankaj Jalote, “Software Engineering, A Precise Approach”, Wiley India, 2010.
Watts S. Humphrey, “Managing the Software Process”, Pearson Education, 2008.
Websites:
http://www.nptelvideos.in/2012/11/software-engineering.html
https://www.projectengineer.net/the-earned-value-formulas/
https://www.smartdraw.com/downloads/
https://www.visualparadigm.com/support/documents/vpuserguide.js
UNIT I: SOFTWARE PROCESS:
Introduction to Software Engineering, Software Process
Software Process Models:
Waterfall Model, Incremental model, Evolutionary model, Agile
process model: Extreme Programming, Scrum.
UNIT III
SOFTWARE DESIGN
Design process – Design Concepts
Design Model:
Architectural Design : Software Architecture- Architectural styles, DFD
Model, Architectural Mapping using Data Flow
Component level Design: Component - Design Guidelines-Cohesion
and Coupling
User Interface Design: Golden Rules- Interface analysis and Design.
Unit IV
Software testing fundamentals-Testing Process
Software testing Strategy: Unit Testing – Integration
Testing – Validation Testing – System Testing.
White box testing- basis path testing and control
structure testing
black box testing- Regression Testing – Debugging
Testing Tools.
Why is testing important?
Software testing provides an independent, objective view of the software and gives assurance of its fitness for use.
It involves testing all components against the required services to confirm whether they satisfy the specified requirements.
The process also provides the client with information about the quality of the software.
Testing is essential because software that fails in operation due to a lack of testing can create dangerous situations; without testing, software should not be deployed to the end user.
Software Testing Fundamentals
Software testing is the process of verifying the correctness of software by considering all of its attributes (reliability, scalability, portability, usability, reusability) and evaluating the execution of software components to find bugs, errors, or defects.
The goal of testing is to find errors, and a good test is one that has a
high probability of finding an error.
Testability. James Bach provides the following definition for testability:
“Software testability is simply how easily [a computer program]
can be tested.”
The following characteristics lead to testable software.
Operability. “The better it works, the more efficiently it can be tested.”
Software that operates well has relatively few bugs to block the execution of tests.
Observability. “What you see is what you test.”
When input and output are observable, it is easy to verify the state of the software.
Software Testing Fundamentals
Controllability. “The better we can control the software, the more the testing
can be automated and optimized.”
By fixing the inputs, we can control the state of the software and hardware. All combinations of inputs can be generated to check the generated outputs. This enables automated testing.
Decomposability. “By controlling the scope of testing, we can more quickly
isolate problems and perform smarter retesting.”
When the program is modularized, each component can be tested as an independent unit.
Simplicity. “The less there is to test, the more quickly we can test it.”
functional simplicity (e.g., the feature set is the minimum necessary to meet
requirements);
structural simplicity (e.g., architecture is modularized to limit the propagation of faults),
and
code simplicity (e.g., a coding standard is adopted for ease of inspection and
maintenance).
Software Testing Fundamentals
Stability. “The fewer the changes, the fewer the
disruptions to testing.”
Controlled changes introduce fewer bugs into the code.
Understandability. “The more information we have,
the smarter we will test.”
Access to technical documents helps testers better understand the requirements and therefore test more effectively.
Attributes of Good “tests”
A good test has a high probability of finding an
error.
A good test is not redundant.
A good test should be “best of breed”.
A good test should be neither too simple nor too
complex.
Types of Software Testing
Various types of testing are available, and they are used to test an application or software product.
At the highest level, software testing is divided into manual testing and automation testing.
Manual testing
The process of checking the functionality of an
application as per the customer needs without taking any
help of automation tools is known as manual testing.
While performing manual testing on an application, we do not need specific knowledge of any testing tool; rather, we need a proper understanding of the product so that we can easily prepare the test documents.
Manual testing can be further divided into three types of
testing, which are as follows:
White box testing
Black box testing
Gray box testing
Automation testing
Automation testing is the process of converting manual test cases into test scripts with the help of automation tools or a programming language.
With the help of automation testing, we can increase the speed of test execution because no human effort is required during the run.
We need to write the test scripts once and then execute them.
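As a minimal sketch of this idea, a manual test case can be converted into an automated script using Python's built-in unittest module; the add function and its cases below are hypothetical stand-ins for any unit of the product.

```python
import unittest

# Hypothetical function under test -- stands in for any unit of the product.
def add(a, b):
    return a + b

class TestAdd(unittest.TestCase):
    """Manual test cases converted into an automated test script."""
    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)
    def test_negative_numbers(self):
        self.assertEqual(add(-2, -3), -5)

# The tool, not a human, executes every case:
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestAdd)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Once written, the script can be re-executed on every build at no extra human cost, which is exactly the speed advantage the slide describes.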
Software Testing Life Cycle (STLC)
The procedure of software testing is also known as STLC (Software
Testing Life Cycle) which includes phases of the testing process.
The testing process is executed in a well-planned and systematic
manner. All activities are done to improve the quality of the software
product.
Software testing life cycle contains the following steps:
Requirement Analysis
Test Plan Creation
Environment setup
Test case Execution
Defect Logging
Test Cycle Closure
Software Testing Life Cycle (STLC)
1. Requirement analysis
Requirement analysis involves identifying, analyzing, and
documenting the requirements of a software system.
During requirement analysis, the software testing team works
closely with the stakeholders to gather information about
the system’s functionality, performance, and usability.
The requirements document serves as a blueprint for the
software development team, guiding them in creating the
software system.
It also serves as a reference point for the testing team,
helping them design and execute effective test cases to
ensure the software meets the requirements.
Software Testing Life Cycle (STLC)
2. Test planning
During the test planning phase, the team develops a complete
plan outlining each testing process step, including identifying
requirements, determining the target audience, selecting
appropriate testing tools and methods, defining roles and
responsibilities, and defining timelines.
This phase aims to ensure that all necessary resources are in
place and everyone on the team understands their roles and
responsibilities.
A well-designed test plan minimizes risks by ensuring that
potential defects are identified early in the development cycle
when they are easier to fix.
Also, adhering to the plan throughout the testing process
fosters thoroughness and consistency in testing efforts which
can save time and cost down the line.
Software Testing Life Cycle (STLC)
3. Test case development
During the test case development phase, the team designs detailed test cases that cover all possible scenarios, so that the software can be tested thoroughly.
Software Testing Life Cycle (STLC)
4. Test environment setup
Test environment setup in the software testing life cycle refers to creating an environment that simulates the production system where the software application will be deployed.
Designing the test environment correctly ensures efficient and effective testing activities.
Software Testing Life Cycle (STLC)
5. Test execution
Test execution refers to the software testing life cycle
phase where created test cases are executed on
the actual system being tested.
At this stage, testers verify whether features,
functions, and requirements prescribed in earlier
phases perform as expected.
The test execution also involves the execution of
automated test cases.
Software Testing Life Cycle (STLC)
6. Test closure
Test closure is integral to the STLC and includes
completing all planned testing activities. It includes
reviewing and analyzing test results,
reporting defects,
identifying achieved or failed test objectives,
assessing test coverage, and
evaluating exit criteria.
Software Testing Strategy—The Big Picture
Software Testing Strategy
Unit testing begins at the vortex of the spiral and concentrates on each
unit (e.g., component, class, or WebApp content object) of the software
as implemented in source code.
Testing progresses by moving outward along the spiral to integration
testing, where the focus is on design and the construction of the
software architecture.
Taking another turn outward on the spiral, you encounter validation
testing, where requirements established as part of requirements
modeling are validated against the software that has been constructed.
Finally, you arrive at system testing, where the software and other
system elements are tested as a whole.
To test computer software, you spiral out in a clockwise direction along streamlines that broaden the scope of testing with each turn.
Software Testing Strategy
Unit testing makes heavy use of testing techniques that
exercise specific paths in a component’s control
structure to ensure complete coverage and maximum
error detection.
Integration testing addresses the issues associated
with the dual problems of verification and program
construction. Test case design techniques that focus on
inputs and outputs are more prevalent during integration,
although techniques that exercise specific program paths
may be used to ensure coverage of major control paths.
Software Testing Strategy
In validation testing, validation criteria (established
during requirements analysis) must be evaluated.
Validation testing provides final assurance that software
meets all informational, functional, behavioral, and
performance requirements.
Software, once validated, must be combined with other
system elements (e.g., hardware, people, databases).
System testing verifies that all elements mesh properly
and that overall system function/performance is achieved.
Unit Testing
Unit testing focuses verification effort on the smallest unit of
software design: the software component or module.
Using the component-level design description as a guide,
important control paths are tested to uncover errors within
the boundary of the module.
The relative complexity of tests and the errors those tests
uncover is limited by the constrained scope established for unit
testing.
The unit test focuses on the internal processing logic and data
structures within the boundaries of a component.
This type of testing can be conducted in parallel for multiple
components.
Unit Testing
Unit-test considerations.
The module interface is tested to ensure that information properly flows
into and out of the program unit under test.
Local data structures are examined to ensure that data stored
temporarily maintains its integrity during all steps in an algorithm’s
execution.
All independent paths through the control structure are exercised to
ensure that all statements in a module have been executed at least once.
Boundary conditions are tested to ensure that the module operates
properly at boundaries established to limit or restrict processing.
Errors often occur when the nth element of an n-dimensional array is
processed, when the ith repetition of a loop with i passes is invoked,
or when the maximum or minimum allowable value is encountered.
Test cases that exercise data structure, control flow, and data values just
below, at, and just above maxima and minima are very likely to uncover errors.
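The boundary advice above can be sketched as a small unit test; find_max is a hypothetical component used only for illustration.

```python
# Hypothetical component under unit test: returns the largest of n values.
def find_max(values):
    if not values:
        raise ValueError("empty input")
    largest = values[0]
    for v in values[1:]:
        if v > largest:
            largest = v
    return largest

# Boundary-condition cases: sizes and positions at the limits of processing.
assert find_max([7]) == 7            # smallest legal size (n = 1)
assert find_max([1, 2, 3]) == 3      # maximum at the nth element
assert find_max([3, 2, 1]) == 3      # maximum at the first element
try:
    find_max([])                     # below the minimum size: error path
except ValueError:
    pass
```

Each assert targets a boundary the slide names: the nth element, the first element, and the minimum allowable input size.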
Unit-test considerations (cont'd)
All error-handling paths are tested. When error handling is evaluated, potential errors include:
(1) error description is unintelligible (vague),
(2) error noted does not correspond to error encountered,
(3) error condition causes system intervention prior to error handling,
(4) exception-condition processing is incorrect, or
(5) error description does not provide enough information to assist in locating the cause of the error.
Data flow across a component interface is tested before any other testing is initiated.
Integration Testing
When the individual modules have been tested, all the modules are
integrated into one software system, and testing is performed at each
step of the integration.
Errors may be introduced while the modules are integrated, for the
following reasons:
the data passed between modules may not arrive as the developer expects,
one module may have an inadvertent effect on another,
small sub-functions may not give the expected result when combined,
small errors in individual functions may compound into large errors when combined, and
global data structures can present problems.
Integration Testing
Integration testing is a systematic technique for
constructing the software architecture while at the same
time conducting tests to uncover errors associated with
interfacing.
The objective is to take unit-tested components and
build a program structure that has been dictated by
design.
“Big bang” approach: put all the components together and test.
This is not an advisable approach, because we cannot isolate the
source of an error; errors crop up one after another, and the
correction process becomes messy.
It is advisable to integrate the modules incrementally.
Types of Incremental Integration Testing
Top-down integration.
Modules are integrated by
moving downward through
the control hierarchy,
beginning with the main
control module
Modules subordinate to the
main control module are
incorporated into the
structure in either a
depth-first or breadth-first
manner.
Integration Testing
The integration process is performed in a series of five steps:
1. The main control module is used as a test driver and stubs
are substituted for all components directly subordinate to the
main control module.
2. Depending on the integration approach selected (i.e., depth or
breadth first), subordinate stubs are replaced one at a
time with actual components.
3. Tests are conducted as each component is integrated.
4. On completion of each set of tests, another stub is
replaced with the real component.
5. Regression testing may be conducted to ensure that new
errors have not been introduced.
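The role of a stub in the steps above can be sketched as follows; main_control, format_report_stub, and format_report are all hypothetical names, not part of any real system.

```python
# Top-down integration sketch: the real main control module calls a stub
# that stands in for a subordinate component not yet integrated.

def format_report_stub(data):
    """Stub: returns a canned answer instead of real formatting logic."""
    return "STUB-REPORT"

def main_control(data, formatter=format_report_stub):
    # The main module is exercised first; the stub satisfies its interface.
    return formatter(data)

# Step 4 of the process: the stub is later replaced by the real component.
def format_report(data):
    return "REPORT: " + ", ".join(str(d) for d in data)

assert main_control([1, 2]) == "STUB-REPORT"
assert main_control([1, 2], formatter=format_report) == "REPORT: 1, 2"
```

The swap from stub to real component leaves main_control untouched, which is what makes one-at-a-time replacement (and regression testing after each swap) practical.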
Main problem in Top Down Integration
Stubs replace low-level modules at the beginning of
top-down testing; therefore, no significant data can flow
upward in the program structure.
As a tester, you are left with three choices:
(1) delay many tests until stubs are replaced with actual
modules,
(2) develop stubs that perform limited functions that
simulate the actual module, or
(3) integrate the software from the bottom of the
hierarchy upward.
Bottom-up integration
As the name implies, the integration starts with the bottom-level
units (subordinate modules).
Hence, there is no need for stubs.
Steps for Implementing Bottom Up integration
1. Low-level components are combined into clusters
(sometimes called builds) that perform a specific software
subfunction.
2. A driver (a control program for testing) is written to
coordinate test case input and output.
3. The cluster is tested.
4. Drivers are removed and clusters are combined moving
upward in the program structure.
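The role of a driver in these steps can be sketched as follows, assuming a hypothetical two-component cluster (parse_amount and apply_tax).

```python
# Bottom-up integration sketch: low-level components form a cluster,
# and a driver coordinates test case input and checks the output.

def parse_amount(text):          # low-level component 1
    return float(text)

def apply_tax(amount, rate):     # low-level component 2
    return amount + amount * rate

def cluster_driver():
    """Driver: feeds test cases through the cluster and verifies results."""
    cases = [("100", 0.1, 110.0), ("50", 0.2, 60.0)]
    for text, rate, expected in cases:
        result = apply_tax(parse_amount(text), rate)
        assert abs(result - expected) < 1e-9
    return "cluster OK"

print(cluster_driver())
```

Once the cluster passes, the driver is discarded and the cluster is wired directly into the next module up the hierarchy, as step 4 describes.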
Bottom-up integration
Components are combined to form clusters 1, 2, and 3. Each cluster is
tested using a driver (shown as a dashed block in the figure).
Components in clusters 1 and 2 are subordinate to Ma.
Drivers D1 and D2 are removed, and the clusters are interfaced directly to Ma.
Similarly, driver D3 for cluster 3 is removed prior to integration with module Mb.
Both Ma and Mb will ultimately be integrated with component Mc, and so forth.
Regression testing
Each time a new module is added as part of integration
testing, the software changes.
New data flow paths are established, new I/O may occur,
and new control logic is invoked.
These changes may cause problems with functions that
previously worked flawlessly.
Regression testing is the reexecution of some
subset of tests that have already been conducted
to ensure that changes have not propagated
unintended side effects.
Regression testing may be conducted manually or
automatically.
Regression testing
The regression test suite (the subset of tests to be
executed) contains three different classes of test cases:
• A representative sample of tests that will exercise all
software functions.
• Additional tests that focus on software functions that
are likely to be affected by the change.
• Tests that focus on the software components that have
been changed
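One way to realize these three classes in practice is to tag each test case and select the subset to re-execute after a change; the catalog below is entirely hypothetical.

```python
# Sketch: tagging test cases with the three regression-suite classes and
# selecting the subset to re-execute after a change.

TEST_CATALOG = [
    {"name": "test_login",     "classes": {"representative"}},
    {"name": "test_checkout",  "classes": {"representative", "affected"}},
    {"name": "test_discounts", "classes": {"affected", "changed"}},
    {"name": "test_reporting", "classes": {"representative"}},
]

def regression_suite(catalog, wanted):
    """Return every test whose class set intersects the wanted classes."""
    return [t["name"] for t in catalog if t["classes"] & wanted]

# After a change to the (hypothetical) discount component, rerun only the
# tests focused on affected or changed software:
assert regression_suite(TEST_CATALOG, wanted={"affected", "changed"}) == [
    "test_checkout", "test_discounts",
]
```

Selecting by class keeps the re-executed subset small while still covering the functions most likely to exhibit unintended side effects.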
Smoke testing
Smoke testing is a type of integration testing. The following activities are
performed in a smoke test:
1. Software components that have been translated into code are
integrated into a build. A build includes all data files, libraries, reusable
modules, and engineered components that are required to implement
one or more product functions.
2. A series of tests is designed to expose errors that will keep the build
from properly performing its function. The intent should be to uncover
“showstopper” errors that have the highest likelihood of throwing the
software project behind schedule.
3. The build is integrated with other builds, and the entire product (in
its current form) is smoke tested daily. The integration approach may be
top down or bottom up.
Benefits of Smoke Testing
Integration risk is minimized.
Major errors are uncovered at the early stages of software development.
The quality of the end product is improved.
Smoke testing uncovers functional errors as well as architectural and
component-level design errors.
Error diagnosis and correction are simplified.
Errors uncovered in a smoke test are usually due to the newly added build,
so it is easy to isolate and rectify them.
Progress is easier to assess.
With each passing day, more of the software has been integrated and more
has been demonstrated to work.
Validation testing
Software validation is achieved through a series of tests that demonstrate
conformity with requirements.
A test plan outlines the classes of tests to be conducted, and a test procedure
defines specific test cases that are designed to ensure that
all functional requirements are satisfied,
all behavioral characteristics are achieved,
all content is accurate and properly presented,
all performance requirements are attained, documentation is correct, and usability and
other requirements are met (e.g., transportability, compatibility, error recovery,
maintainability).
After each validation test case has been conducted, one of two possible
conditions exists:
(1) The function or performance characteristic conforms to specification and is
accepted or
(2) a deviation from specification is uncovered and a deficiency list is created.
Validation testing
Configuration Review:
An important element of the validation process is a configuration review.
The objective of the review is to ensure that all elements of the software
configuration have been properly developed.
The configuration review is sometimes called an audit.
Acceptance Testing :
When custom software is built for one customer, a series
of acceptance tests are conducted to enable the
customer to validate all requirements.
Conducted by the end user rather than software
engineers, an acceptance test can range from an informal
“test drive” to a planned and systematically executed
series of tests.
In fact, acceptance testing can be conducted over a
period of weeks or months.
Alpha and Beta Testing :
If software is developed as a product to be used by many
customers, it is impractical to perform formal acceptance tests
with each one.
Most software product builders use a process called alpha and
beta testing to uncover errors that only the end user seems
able to find.
The alpha test is conducted at the developer’s site by a
representative group of end users.
The software is used in a natural setting with the developer
“looking over the shoulder” of the users and recording errors
and usage problems.
Alpha tests are conducted in a controlled environment.
Alpha and Beta Testing :
The beta test is conducted at one or more end-user
sites.
Unlike alpha testing, the developer generally is not
present. Therefore, the beta test is a “live” application of
the software in an environment that cannot be
controlled by the developer.
The customer records all problems (real or imagined)
that are encountered during beta testing and reports
these to the developer at regular intervals.
As a result of problems reported during beta tests, you
make modifications and then prepare for release of the
software product to the entire customer base.
Alpha Vs Beta Testing
Who performs it: Alpha testing is done by internal testers of the organization; beta testing is done by real users.
Where: Alpha is an internal test, performed within the organization; beta is an external test, carried out in the user's environment.
Techniques: Alpha testing uses both black-box and white-box testing techniques; beta testing uses only the black-box technique.
Purpose: Alpha testing identifies possible errors; beta testing checks the quality of the product.
Bug handling: In alpha testing, developers start fixing bugs as soon as they are identified; in beta testing, errors are found by users and their feedback is necessary.
Duration: Alpha testing has long execution cycles; beta testing takes only a few weeks.
Fixes: Alpha-test fixes can be implemented easily since testing is done before the end of development; beta-test fixes are implemented in a future version of the product.
Order: Alpha testing is performed before beta testing, which is the final test before launching the product on the market.
Question answered: Alpha testing asks "Does the product work?"; beta testing asks "Do customers like the product?"
Scope: Alpha testing covers functionality and usability; beta testing covers usability, functionality, security, and reliability with the same depth.
System Testing & its types
System testing is actually a series of different tests whose
primary purpose is to fully exercise the computer-based
system.
Although each test has a different purpose, all work to
verify that system elements have been properly
integrated and perform allocated functions.
Types of system tests :
Recovery Testing
Security Testing
Stress Testing
Performance Testing
Deployment Testing
System Testing & its types
Recovery Testing
Recovery testing is a system test that forces the software
to fail in a variety of ways and verifies that recovery is
properly performed.
If recovery is automatic (performed by the system itself),
reinitialization, checkpointing mechanisms, data recovery,
and restart are evaluated for correctness.
If recovery requires human intervention, the mean-time-to-repair (MTTR)
is evaluated to determine whether it is within acceptable limits.
Security Testing
Security testing attempts to verify that the protection mechanisms built
into a system will, in fact, protect it from improper penetration.
System Testing & its types
Stress Testing
It executes a system in a manner that demands resources
in abnormal quantity, frequency, or volume.
Stress tests are designed to confront programs with abnormal
situations. In essence, the tester who performs stress
testing asks: "How high can we crank this up before it fails?"
A variation of stress testing is a technique called sensitivity testing.
System Testing & its types
Performance Testing
Test the run-time performance of software within the
context of an integrated system.
Performance tests are often coupled with stress testing.
Performance testing occurs throughout all steps in the
testing process. Even at the unit level, the performance of
an individual module may be assessed as tests are
conducted
System Testing & its types
Deployment Testing
In many cases, software must execute on a variety of
platforms and under more than one operating system
environment.
Deployment testing, sometimes called configuration
testing, exercises the software in each environment in
which it is to operate.
In addition, deployment testing examines all installation
procedures and specialized installation software (e.g.,
“installers”) that will be used by customers, and all
documentation that will be used to introduce the
software to end users.
WHITE-BOX TESTING
White-box testing is a methodology that tests the internal structures
and workings of software.
White-box testing, also known as structural testing, is testing based on
analysis of internal logic (design, code, etc.).
Using white-box testing methods, you can derive test cases that:
(1) guarantee that all independent paths within a module have been
exercised at least once,
(2) exercise all logical decisions on their true and false sides,
(3) execute all loops at their boundaries and within their operational
bounds, and
(4) exercise internal data structures to ensure their validity.
Basis path testing
Basis path testing determines the logical complexity measure of a
procedure or function and uses this measure as a guideline to define
the set of execution paths for testing.
Flow Graph Notation
The flow graph depicts logical control flow using the notation
illustrated in the figure.
• Each circle, called a flow graph node, represents one or more
procedural statements.
• A sequence of process boxes and a decision diamond can map into
a single node.
• The arrows on the flow graph, called edges or links, represent
flow of control and are analogous to flowchart arrows.
• An edge must terminate at a node, even if the node does not
represent any procedural statements
• Areas bounded by edges and nodes are called regions. When counting
regions, we include the area outside the graph as a region.
Compound Predicate Representation
Cyclomatic Complexity
Cyclomatic complexity is a software metric that
provides a quantitative measure of the logical
complexity of a program.
Provides an upper bound for the number of tests
that must be conducted to ensure that all statements
have been executed at least once.
Complexity is computed in one of three ways:
1. The number of regions of the flow graph
corresponds to the cyclomatic complexity.
2. Cyclomatic complexity V(G) for a flow graph G is
defined as
V(G) = E - N + 2
where E is the number of flow graph edges and N is
the number of flow graph nodes. For example, a flow
graph with 11 edges and 9 nodes has V(G) = 11 - 9 + 2 = 4.
3. Cyclomatic complexity V(G) is also defined as
V(G) = P + 1
where P is the number of predicate nodes in the flow graph G.
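The formulas can be checked against each other on a small flow graph; the graph below is a hypothetical example, not the one from the slides.

```python
# Computing cyclomatic complexity from a flow graph using the
# edge/node formula and the predicate-node formula.

edges = [(1, 2), (2, 3), (2, 4), (3, 5), (4, 5), (5, 2), (5, 6)]
nodes = {1, 2, 3, 4, 5, 6}
predicate_nodes = {2, 5}   # nodes with more than one outgoing edge

E, N, P = len(edges), len(nodes), len(predicate_nodes)

v_edges = E - N + 2        # V(G) = E - N + 2
v_pred = P + 1             # V(G) = P + 1
assert v_edges == v_pred == 3
```

V(G) = 3 says three linearly independent paths exist, so at least three test cases are needed to execute every statement of this graph at least once.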
Black-box testing
Black-box testing, also called behavioral testing, focuses
on the functional requirements of the software.
Black Box Testing is a software testing method in which
the functionalities of software applications are tested
without having knowledge of internal code structure,
implementation details and internal paths.
Black-box testing focuses mainly on the inputs and outputs of
software applications and is based entirely on software
requirements and specifications.
Black box testing
Black-box testing attempts to find errors in the following
categories:
1. incorrect or missing functions,
2. interface errors,
3. errors in data structures or external database access,
4. behavior or performance errors, and
5. initialization and termination errors.
Black box testing - Graph-Based Testing Methods
You begin by creating a graph: a collection of nodes that represent
objects, links that represent the relationships between objects,
node weights that describe the properties of a node (e.g., a specific
data value or state behavior), and link weights that describe some
characteristic of a link.
Graph-Based Testing Methods
A directed link (represented by an arrow) indicates
that a relationship moves in only one direction.
A bidirectional link, also called a symmetric link,
implies that the relationship applies in both directions.
Parallel links are used when a number of different
relationships are established between graph nodes.
In reality, a far more detailed graph would have to be
generated as a precursor to test-case design.
You can then derive test cases by traversing the graph
and covering each of the relationships shown.
Graph-Based Testing Methods
Graph Based Testing methods can be used for the following
Transaction flow modeling: Nodes represent steps in a transaction, and
links represent the logical connections between steps (e.g., a flight
information system).
Finite state modeling: The nodes represent different
user-observable states of the software (e.g., each of the “screens” that
appear as an order entry clerk takes a phone order), and the links
represent the transitions that occur to move from state to state
Data flow modeling: The nodes are data objects, and the links are
the transformations that occur to translate one data object into
another.
Timing modeling: The nodes are program objects, and the links are
the sequential connections between those objects. Link weights are
used to specify the required execution times as the program executes.
Equivalence Partitioning
Equivalence partitioning is a black-box testing method that divides the
input domain of a program into classes of data from which test cases
can be derived.
An equivalence class represents a set of valid or invalid states for input
conditions. Typically, an input condition is either a specific numeric value,
a range of values, a set of related values, or a Boolean condition.
Equivalence classes may be defined according to the following
guidelines:
1. If an input condition specifies a range, one valid and two invalid equivalence
classes are defined.
2. If an input condition requires a specific value, one valid and two invalid
equivalence classes are defined.
3. If an input condition specifies a member of a set, one valid and one invalid
equivalence class are defined.
4. If an input condition is Boolean, one valid and one invalid class are defined.
Equivalence Partitioning - Example
The discount is calculated depending on the total amount of
the shopping cart. If the total amount is in the range of
$100–$200, the discount is 10%. If the total amount is in the
range of $201–$500, the discount is 20%. If the total amount
is more than $500, the discount is 30%. In this scenario, we
can identify three valid partitions and one invalid partition
for the amount under $100.
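A sketch of this scenario, with one representative test value per partition; the discount function is a hypothetical implementation of the stated rules.

```python
# Equivalence partitioning sketch for the shopping-cart discount scenario.
def discount(amount):
    if amount < 100:
        return 0.0      # no discount below $100
    if amount <= 200:
        return 0.10
    if amount <= 500:
        return 0.20
    return 0.30

# One representative value per equivalence class is enough:
assert discount(50) == 0.0     # invalid partition: under $100
assert discount(150) == 0.10   # valid partition: $100-$200
assert discount(350) == 0.20   # valid partition: $201-$500
assert discount(900) == 0.30   # valid partition: over $500
```

Four test cases cover all four partitions; any other value inside a partition would, by assumption, behave the same as its representative.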
Boundary value analysis
More errors occur at the boundaries of the input domain than in its
"center"; these boundaries are breaking points that are most likely
to be handled incorrectly.
Boundary value analysis leads to a selection of test cases
that exercise bounding values.
1. If an input condition specifies a range bounded by
values a and b, test cases should be designed with values
a and b and just above and just below a and b.
E.g., if the valid range is 50 < a < 100, design test cases at and just beyond 50 and 100.
2. If an input condition specifies a number of values, test
cases should be developed that exercise the minimum
and maximum numbers.Values just above and below
minimum and maximum are also tested.
Eg: Loop with min and max value
Boundary value analysis
If the total amount is in the range of $100–$200, the
discount is 10%. If the total amount is in the range of
$201–$500, the discount is 20%. If the total amount is
more than $500, the discount is 30%. With a simple
illustration, we can define the boundaries very easily.
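A sketch of boundary value analysis for the same scenario, testing values at and just beyond each boundary; the discount function is again a hypothetical implementation of the stated rules, repeated here so the example stands alone.

```python
# Boundary value analysis sketch for the discount scenario.
def discount(amount):
    if amount < 100:
        return 0.0
    if amount <= 200:
        return 0.10
    if amount <= 500:
        return 0.20
    return 0.30

# Test just below, at, and just above each boundary:
boundary_cases = {
    99: 0.0, 100: 0.10,     # $100 boundary
    200: 0.10, 201: 0.20,   # $200/$201 boundary
    500: 0.20, 501: 0.30,   # $500 boundary
}
for amount, expected in boundary_cases.items():
    assert discount(amount) == expected
```

An off-by-one mistake such as writing `amount < 200` instead of `amount <= 200` would be caught only by the boundary value 200, which is exactly why these values are chosen.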
Orthogonal Array Testing
Orthogonal array testing can be applied to problems in
which the input domain is relatively small but too large
to accommodate exhaustive testing.
The orthogonal array testing method is particularly
useful in finding region faults.
Consider a system that has three input items, X,Y, and
Z. Each of these input items has three discrete values
associated with it. There are 3^3 = 27 possible test
cases.
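The gap between exhaustive testing and orthogonal array testing can be illustrated with a standard L9 array, which covers every pairwise combination of the three 3-level inputs in only 9 runs.

```python
from itertools import product

# Exhaustive testing of three 3-level inputs needs 3**3 = 27 cases:
exhaustive = list(product([0, 1, 2], repeat=3))
assert len(exhaustive) == 27

# A standard L9 orthogonal array does it in 9 runs. Levels 0/1/2 stand in
# for the three discrete values of X, Y, and Z.
L9 = [
    (0, 0, 0), (0, 1, 1), (0, 2, 2),
    (1, 0, 1), (1, 1, 2), (1, 2, 0),
    (2, 0, 2), (2, 1, 0), (2, 2, 1),
]
# The defining property: each pair of columns shows all 9 value pairs.
for i, j in [(0, 1), (0, 2), (1, 2)]:
    assert {(run[i], run[j]) for run in L9} == set(product([0, 1, 2], repeat=2))
```

Because every pair of factor levels appears together exactly once, the 9 runs can detect and isolate all single-mode faults and detect all double-mode faults, as the analysis slide below describes.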
How to do Orthogonal Array Testing:
Examples
Identify the independent variable for the scenario.
Find the smallest array with the number of runs.
Map the factors to the array.
Choose the values for any “leftover” levels.
Transcribe the Runs into test cases, adding any
particularly suspicious combinations that aren’t
generated.
Example: a microprocessor’s functionality has to be tested.
1. Create columns for the number of factors.
2. Enter a number of rows equal to the levels per factor; e.g.,
temperature has 3 levels, so insert 3 rows, one for each temperature level.
3. Then fill in the pressure, doping amount, and deposition rate columns.
Analyze the result of tests using the L9 orthogonal array in the
following manner:
Detect and isolate all single-mode faults.
Detect all double-mode faults.
Detect multimode faults.
Debugging
Debugging occurs as a consequence of successful testing.
Debugging is the process that results in the removal of an error.
Debugging has one objective: to find and correct the cause of a
software error or defect.
The debugging process will usually have one of two outcomes:
(1) the cause will be found and corrected, or
(2) the cause will not be found.
Debugging
Three debugging strategies
(1) brute force,
This is the last option to use, if all the other methods fail.
(2) backtracking, and
Beginning at the site where a symptom has been uncovered, the
source code is traced backward (manually) until the cause is
found.
It is suitable for small projects.
(3) cause elimination.
A “cause hypothesis” is devised and the relevant input data are
used to prove or disprove the hypothesis.
Debugging
Automated debugging. Each of these debugging approaches
can be supplemented with debugging tools that can provide you
with semiautomated support as debugging strategies are
attempted.
Integrated development environments (IDEs) provide a way to
capture some of the language-specific predetermined errors
(e.g., missing end-of-statement characters, undefined variables,
and so on) without requiring compilation. A wide variety of
debugging compilers, dynamic debugging aids (“tracers”),
automatic test-case generators, and cross-reference mapping
tools are available.
However, tools are not a substitute for careful evaluation based
on a complete design model and clear source code.
Debugging
Correcting the Error:
Before correcting the bug, one should ask the following
questions
Is the cause of the bug reproduced in another part of the program?
Most of the time, a similar logical pattern is used in other parts of the
program as well. Finding the erroneous pattern will uncover other similar errors.
What “next bug” might be introduced by the fix I’m about to make?
Check the logical coupling and data coupling involved.
If the coupling is heavy, then care should be taken when fixing the bug.
What could we have done to prevent this bug in the first place?
If you correct the process as well as the product, the bug will be removed
from the current program and may be eliminated from all future programs.
END OF UNIT IV