Lecture Notes Software Engineering
SOFTWARE ENGINEERING
Chapter - 1
Introduction to Software Engineering
• Relevance of software engineering
• Software characteristics and applications
• Emergence of software engineering.
• Early computer programming, high-level language programming,
control-flow-based design, data-flow-oriented design, data-structure-
oriented design, and object- and component-based design
• Software life cycle models
• Classical waterfall and iterative waterfall models
• Prototyping
• Evolutionary model
• Spiral model
Chapter – 2
Chapter - 3
Understanding the need of Requirement Analysis
Chapter - 4
Understanding the Principles and Methods of S/W Design
• Importance of S/W Design
• Design principles and Concepts
• Cohesion and coupling
• Classification of cohesiveness
• Classification of coupling
• S/W design approaches
• Structured analysis methodology
• Use of DF diagram
• List the symbols used in DFD
• Construction of DFD
• Limitations of DFD
• Uses of structure chart and structured design
• Principles of transformation of DFD to structured chart
• Transform analysis and transaction analysis
• Object oriented concepts
• Object oriented and function oriented design
Chapter - 5
Understanding the Principles of User Interface Design
Chapter - 6
Understanding the Principles of Software Coding
Chapter-7
Chapter-8
Understanding Computer Aided Software
Engineering (CASE)
Contents
Chapter - 1
Introduction to Software Engineering
Software engineering is the field of computer science that deals with the
building of software systems which are so large or so complex that they
are built by a team or teams of engineers.
• Software Analysis
• Software Design
• Software Testing
• Software Maintenance
A software life cycle model is also referred to as a software process model.
The phases starting from the feasibility study to the integration and
system testing phase are known as the development phases. All these
activities are performed in sequence, with no phase skipped or repeated.
No activity can be revisited once it is closed; its results are
passed on to the next phase for use.
Feasibility Study
The feasibility study involves analysis of the input data to the system,
the processing to be carried out on these data, and the output data
required to be produced by the system.
Technical Feasibility
Can the work for the project be done with current equipment, existing
software technology and available personnel?
Economic Feasibility
Are there sufficient benefits in creating the system to make the costs
acceptable?
Operational Feasibility
Will the system be used if it is developed and implemented?
Requirements Specification
This activity consists of first gathering the requirements and then
analyzing the gathered requirements.
During the integration and system testing phase the different modules
are integrated in a planned manner. Integration of the various modules is
normally carried out incrementally over a number of steps. During
each integration step, previously planned modules are added to the
partially integrated system and the resultant system is tested. Finally,
after all the modules have been successfully integrated and tested,
system testing is carried out.
Maintenance
• Corrective Maintenance
This type of maintenance involves correcting errors that were not
discovered during the product development phase.
• Perfective Maintenance
• Prototyping Model
Prototyping is an attractive idea for complicated and large systems for
which there is no manual process or existing system to help to
determine the requirements.
• Evolutionary Model
This life cycle model is also referred to as the successive versions model
and the incremental model. In this life cycle model the software is first
broken down into several modules or functional units which can be
incrementally constructed and delivered.
• Spiral Model
The spiral model also known as the spiral life cycle model is a systems
development life cycle model used in information technology. This
model of development combines the features of the prototyping model,
the waterfall model and other models. The diagrammatic representation
of this model appears like a spiral with many loops.
Fig. 1.6 Spiral Model of Software Development
In the spiral model of development, the project team must decide how
exactly to structure the project into phases. The most distinguishing
feature of this model is its ability to handle risks. The spiral model uses
prototyping as a risk reduction mechanism and also retains the
systematic step-wise approach of the waterfall model.
Contents
• The Product
• The Process
• The Project
• Project Management
Project Planning
Size estimation is the first activity. The size is the key parameter for the
estimation of other activities. Other components of project planning are
estimation of effort, cost, resources and project duration.
Disadvantages:
• LOC is language dependent. A line of assembler is not the same
as a line of COBOL.
• The LOC measure correlates poorly with the quality and efficiency of
the code. A larger code size does not necessarily imply better
quality or higher efficiency.
• The LOC metric penalizes the use of higher-level programming
languages, code reuse, etc.
• It is very difficult to accurately estimate LOC in the final product
from the problem specification. The LOC count can be accurately
computed only after the code has been fully developed.
• Number Of Inputs:
Each data item input by the user is counted.
• Number Of Outputs:
The outputs refer to reports printed, screen outputs, error messages
produced etc.
• Number Of Inquiries:
It is the number of distinct interactive queries which can be made by
the users.
• Number Of Files:
Each logical file is counted. A logical file means groups of
logically related data. Thus logical files can be data structures or
physical files.
• Number Of Interfaces:
Here, the interfaces that are used to exchange information with
other systems are counted. Examples of interfaces are data files on tapes,
disks, communication links with other systems, etc.
Function Point (FP) is estimated using the formula:
FP = UFP (Unadjusted Function Point) * TCF (Technical Complexity
Factor)
UFP = (Number of inputs) * 4 + (Number of outputs) * 5 + (Number
of inquiries) * 4 + (Number of files) * 10 + (Number of
interfaces) * 10
TCF = 0.65 + DI (Degree of Influence) * 0.01
The unadjusted function point count (UFP) reflects the specific
countable functionality provided to the user by the project or
application.
Once the unadjusted function point count (UFP) is computed,
the technical complexity factor (TCF) is computed next. The TCF
refines the UFP measure by considering fourteen other factors such
as high transaction rates, throughput and response time requirements,
etc. Each of these 14 factors is assigned a value from 0 (not present
or no influence) to 5 (strong influence). The resulting numbers are
summed, yielding the total degree of influence (DI). Now, the TCF is
computed as (0.65 + 0.01 * DI). As DI can vary from 0 to 70, the TCF
can vary from 0.65 to 1.35.
Finally, FP = UFP * TCF.
• Project duration
• Effort required to develop the software
There are three broad categories of estimation techniques:
• Empirical estimation techniques
• Heuristic techniques
• Analytical estimation techniques
Heuristic Techniques
• Multivariable model
A single variable estimation model takes the following form:
Estimated parameter = c1 * e^d1
where e is a characteristic of the software (such as its size), and c1 and
d1 are constants.
The most widely used cost estimation technique is the expert judgment,
which is an inherently top-down estimation technique. In this approach
an expert makes an educated guess of the problem size after analyzing
the problem thoroughly. The expert estimates the cost of the different
modules or subsystems and then combines them to arrive at the
overall estimate.
Basic COCOMO
Intermediate COCOMO
The basic COCOMO model allowed for a quick and rough estimate, but
it resulted in a lack of accuracy. Basic model provides single-variable
(software size) static estimation based on the type of the software. A
host of other project parameters besides the product size affect the
effort required to develop the product as well as the development time.
• Product attributes
• Computer attributes
• Personnel attributes
• Development environment
Product
The characteristics of the product data considered include the inherent
complexity of the product, reliability requirements of the product,
database size etc.
Computer
The characteristics of the computer that are considered include the
execution speed required, storage space required etc.
Personnel
The attributes of development personnel that are considered include
the experience level of personnel, programming capability, analysis
capability etc.
Development Environment
The development environment attributes capture the development
facilities available to the developers.
A critical path is the chain of activities that determine the duration of the
project.
After the project manager has broken down the task and created the
work breakdown structure, he has to find the dependency among the
activities. Dependency among the different activities determines the
order in which the different activities would be carried out. If an activity
A requires the results of another activity B, then activity A must be
scheduled after activity B. The task dependencies define a partial
ordering among tasks.
Once the activity network representation has been worked out,
resources are allocated to each activity. Resource allocation is typically
done using a Gantt chart. After resource allocation is done, a Project
Evaluation and Review Technique chart representation is developed.
The PERT chart representation is suitable for program monitoring and
control.
Most project control techniques are based on breaking down the goal of
the project into several intermediate goals. Each intermediate goal can
be broken down further. This process can be repeated until each goal is
small enough to be well understood.
• A path from the start node to the finish node containing only critical
tasks is called a critical path.
• The above parameters for different tasks for the MIS problem
(Fig.2.4) are shown in the following table.
Task                    ES    EF    LS    LF    ST
Specification            0    15     0    15     0
Design Database Part    15    60    15    60     0
Design GUI Part         15    45    90   120    75
Code Database Part      60   165    60   165     0
Code GUI Part           45    90   120   165    75
Integrate and Test     165   285   165   285     0
Write User Manual       15    75   225   285   210
The critical paths are all the paths whose duration equals MT. The
critical path in Fig.2.4 is shown with thick arrow lines.
Gantt Chart
Gantt charts are a project control technique that can be used for several
purposes including scheduling, budgeting and resource planning. Gantt
Charts are mainly used to allocate resources to activities. A Gantt chart is a
special type of bar chart where each bar represents an activity. The bars
are drawn against a time line. The length of each bar is proportional to
the duration of the time planned for the corresponding activity.
Fig. 2.5 Gantt Chart Representation of the MIS Problem
In the Gantt chart each bar consists of a white part and a shaded part. The
shaded part of the bar shows the length of time each task is estimated to
take. The white part shows the slack time, that is, the latest time by
which a task must be finished.
PERT controls time and cost during the project and also facilitates
finding the right balance between completing a project on time and
completing it within budget.
A PERT Chart is a network of boxes (or circles) and arrows. The boxes
represent activities and the arrows are used to show the dependencies of
activities on one another. The activity at the head of an arrow cannot
start until the activity at the tail of the arrow is finished. The boxes in a
PERT Chart can be decorated with starting and ending dates for
activities. PERT Chart is more useful for monitoring the timing progress
of activities.
PERT Chart shows the interrelationship among the tasks in the project
and identifies critical path of the project.
• Organisation Structure
(Figure: functional organisation structure, with separate teams for
requirements, design, coding, testing, project management and maintenance)
In the project format, a set of engineers are assigned to the project at the
start of the project and they remain with the project till the completion
of the project. Thus, the same team carries out all the life cycle
activities. Obviously, the functional format requires more
communication among teams than the project format, because one team
must understand the work done by the previous teams. The main
advantages of a functional organization are:
• Ease of staffing
• Production of good quality documents
• Job specialization
• Efficient handling of the problems associated with manpower turnover
The functional organisation allows engineers to become specialists in
their particular roles, e.g. requirements analysis, design, coding, testing,
maintenance, etc. The functional organisation also provides an efficient
solution to the staffing problem. A project organisation structure forces
the manager to take on an almost constant number of engineers for the
entire duration of the project.
• Team Structure
• Chief programmer
• Democratic
• Mixed team organization
Fig. 2.8 Chief programmer team structure
Democratic Team
The democratic team structure does not enforce any formal team
hierarchy. Typically a manager provides the administrative leadership.
At different times, different members of the group provide technical
leadership.
The mixed team organization draws upon the ideas from both the
democratic organization and the chief programmer organization. This
team organization incorporates both hierarchical reporting and
democratic set-up.
Communication
The mixed control team organization is suitable for large team sizes.
The democratic arrangement at the senior engineers level is used to
decompose the problem into small parts. Each democratic set-up at the
programmer level attempts to find the solution to a single part. This team
structure is extremely popular and is being used in many software
development companies.
• Importance of Risk Identification, Risk Assessment
and Risk Containment with reference to Risk
Management
Risk management is an emerging area that aims to address the problem
of identifying and managing the risk associated with a software project.
Risk in a project is the possibility that the defined goals are not met.
The basic motivation for having risk management is to avoid heavy
losses.
• Risk identification
• Risk assessment
• Risk containment
Risk Identification
Risk Assessment
Risk Containment
After all the identified risks of a project are assessed, plans must be made
to contain the most damaging and the most likely risks. Three main
strategies used for risk containment are:
• Avoid the risk
• Risk reduction
• Transfer the risk
Chapter-3
Understanding the need of Requirement Analysis
Contents
• Problem recognition
• Modeling
• Specification
• Review
• Principles of Analysis
All analysis methods are related by a set of operational principles:
• Software Prototyping
The prototyping paradigm can be either close-ended or open-ended. The
close-ended approach is called throwaway prototyping and the open-
ended approach is called evolutionary prototyping.
• Prototyping Approach
Throwaway prototyping: The prototype is only used as a demonstration of
the product requirements.
• SRS Document
The requirements analysis and specification phase starts once the
feasibility study phase is completed and the project is found to be
financially sound and technically feasible. The goal of the requirement
analysis and specification phase is to clearly understand the customer
requirements and to systematically organize these requirements in a
specification document. This phase consists of two activities:
• Requirements gathering and analysis.
• Requirements specification
• Anomaly
• Inconsistency
• Incompleteness
After the analyst has collected all the required information regarding the
software to be developed and has removed all incompleteness,
inconsistencies and anomalies from the specification, the analyst starts to
systematically organize the requirements in the form of an SRS
document. The SRS document usually contains all the user
requirements in an informal form.
Different people need the SRS document for very different purposes.
Some of the important categories of users of the SRS document and
their needs are as follows.
• Users, customers and marketing personnel
They want to ensure that they can estimate the cost of the project
easily by referring to the SRS document and that it contains all the
information required to plan the project.
• Maintenance Engineers
• Functional Requirements
• Nonfunctional Requirements
• Goals of implementation
The functional requirements of the system as documented in the SRS
document should clearly describe each function which the system
would support along with the corresponding input and output data
set.
Fig. 3.1 Contents of SRS Document
Black-box View: It should specify what the system should do. The SRS
document should specify the external behavior of the system and not
discuss the implementation issues. The SRS should specify the
externally visible behavior of the system. [For this reason the SRS
document is called the black-box specification of a system.]
The organization of the SRS document and the issues covered depend on
the type of the product being developed. Three basic issues of SRS
documents are: functional requirements, nonfunctional requirements, and
guidelines for system implementation. The SRS document should be
organized into:
• Introduction
• Background
• Overall Description
• Functional requirements
• Nonfunctional requirements, including (c) environmental
characteristics: (i) hardware, (ii) peripherals, (iii) people
• Goals of implementation
• Behavioural description
• System states
Chapter-4
Understanding the Principles and Methods of S/W Design
Contents
• Control relationships among the modules, as identified in the design
document. This relationship is also known as the call relationship.
• Interface among different modules. The interface among
different modules identifies the exact data items
exchanged among the modules.
• Data structures of the individual modules.
• Algorithms required to implement the individual modules.
Design Concepts
Detailed Design
During detailed design, the data structure and the algorithms of different
modules are designed. The outcome of the detailed design stage is
usually known as the module specification document.
Modularity
Clean Decomposition
The modules in a software design should display high cohesion and low
coupling. The modules are more or less independent of each other.
Layered Design
• Classification of Cohesiveness
Temporal Cohesion
When a module contains functions that are related by the fact that all
the functions must be executed in the same time span, the module is
said to exhibit temporal cohesion. For example, consider the situation:
when a computer is booted, several functions need to be performed.
These include initialization of memory and devices, loading the
operating system etc. When a single module performs all these tasks,
then the module can be said to exhibit temporal cohesion.
Procedural Cohesion
• Classification of Coupling
Stamp Coupling
Two modules are stamp coupled, if they communicate using a
composite data item such as a structure in C.
Control Coupling
Module A and B are said to be control coupled if they communicate
by passing of control information.
Common Coupling
Two modules are common coupled, if they share some global data items.
Content coupling
Top-down decomposition
In top-down decomposition, starting at a high-level view of the system,
each high-level function is successively refined into more detailed
functions.
Example: Consider a function create-new-library-member, which
essentially creates the record for a new member and prints a bill. It may be
decomposed into the following sub-functions:
• create-member-record
• print-bill
Each of these sub-functions may be split into more detailed sub-functions,
and so on.
Object Oriented Design
• Construction of DFD
A DFD model of a system graphically represents how each input data
item is transformed into its corresponding output data through a hierarchy
of DFDs.
The DFD model starts with the most abstract definition of the system
(lowest level) and at each higher level DFD, more details are successively
introduced. The most abstract representation of the problem is also
called the context diagram.
Context Diagram
The context diagram represents the entire system as a single bubble.
The bubble is labelled according to the main function of the system.
The various external entities with which the system interacts and the
data flows occurring between the system and the external entities are
also represented. The data input to the system and the data output from
the system are represented as incoming and outgoing arrows.
Level 1 DFD
The level 1 DFD usually contains between 3 and 7 bubbles. To develop
the Level 1 DFD, examine the high-level functional requirements. If
there are between 3 and 7 high-level functional requirements, then these
can be directly represented as bubbles in the Level 1 DFD. We can
examine the input data to these functions and the data output by these
functions and represent them appropriately in the diagram. If a system
has more than seven high-level requirements, then some of the related
requirements have to be combined and represented in the form of a
bubble in the Level 1 DFD.
Decomposition
Each bubble in the DFD represents a function performed by the system.
The bubbles are decomposed into sub functions at the successive level
of the DFD. Each bubble at any level of the DFD is usually decomposed
into three to seven bubbles. Decomposition of a bubble should be carried
on until a level is reached at which the function of the bubble can be
described using a simple algorithm.
Example: Student admission and examination system
This system has three modules, namely:
• Registration module
• Examination module
• Result generation module

Registration module:
An application must be registered, for which the applicant should pay the
required registration fee. This fee can be paid through a demand draft or
cheque drawn from a nationalized bank. After successful registration an
enrolment number is allotted to each student, which makes the student
eligible to appear in the examination.
Examination module:
(Figures: Level 1 and Level 2 DFDs of the system, with data flows such as
Student Registered, Mark Sheet and Marks Detail.)
• Limitations of DFD
• A data flow diagram does not show flow of control. It does not
show details linking inputs and outputs within a transformation.
It only shows all possible inputs and outputs for each
transformation in the system.
• The method of carrying out decomposition to arrive at the
successive level and the ultimate level to which decomposition is
carried out are highly subjective and depend on the choice and
judgement of the analyst. Many times it is not possible to say
which DFD representation is superior or preferable to another.
• The data flow diagram does not provide any specific guidance as
to how exactly to decompose a given function into its
subfunctions.
• Size of the diagram depends on the complexity of the logic.
• Structured Design
The aim of structured design is to transform the results of the structured
analysis, that is the DFD representation, into a structure chart. A
structure chart represents the software architecture, i.e. the various
modules making up the system, the module dependency, and the
parameters that are passed among the different modules. The structure
chart representation can be easily implemented using some
programming language. Since the main focus in a structure chart
representation is on the module structure of the software and the
interaction among the different modules, the procedural aspects are not
represented in a structured design. The basic building blocks which are
used to design structure charts are:
Transform analysis
Transaction analysis
Normally, one starts with the level 1 DFD, transforms it into a module
representation using either transform or transaction analysis, and
then proceeds towards the lower-level DFDs. At each level of
transformation, first determine whether transform or transaction
analysis is applicable to the particular DFD.
Transform Analysis
Transform analysis divides the DFD into three parts:
• Input
• Logical processing
• Output
The input portion in the DFD includes processes that transform input
data from physical to logical form. Each input portion is called an
afferent branch. The output portion of a DFD transforms output data
from logical form to physical form. Each output portion is called an
efferent branch. The remaining portion of a DFD is called central
transform.
Transaction Analysis
For each identified transaction, we trace the input data to the output. In
the structure chart, we draw a root module and below this module we
draw each identified transaction as a module.
Chapter-5
Understanding the Principles of User Interface Design
• Design for direct interaction with objects that appear on the screen.
Reduce the User’s Memory Load
Principles that enable an interface to reduce the user’s memory load are:
• Reduce demand on short-term memory.
• Establish meaningful defaults.
• Define shortcuts that are intuitive.
• Disclose information in a progressive fashion.
• Design model
• Mental model
• Implementation model
A software engineer establishes a user model, the software engineer
creates a design model, the end-user develops a mental image that is
often called the user's model or the system perception, and the
implementers of the system create a system image.
The design process for user interfaces is iterative and can be represented
using a spiral model. The user interface design process encompasses
four distinct activities
• User, task, and environment analysis and modelling
• Interface design
• Interface construction
• Interface validation
The initial analysis activity focuses on the profile of the users who will
interact with the system. Skill level and business understanding are
recorded and different user categories are defined. The software
engineer attempts to understand the system perception for each class of
users.
Once general requirements have been defined, a more detailed task
analysis is conducted. Those tasks that the user performs to accomplish
the goals of the system are identified, described and elaborated.
The goal of interface design is to define a set of interface objects and
actions that enable a user to perform all defined tasks in a manner that
meets every usability goal defined for the system.
Design Issues
Four common design issues are:
• System response time
• User help facilities
• Error information handling and
• Command labelling
System response time is the primary complaint for many interactive
applications. System response time is measured from the point at which
the user performs some control action until the software responds with
desired output or action. Two important characteristics of system
response time are length and variability.
Two different types of help facilities are integrated and add-on. An
integrated help facility is designed into the software from the beginning.
An add-on help facility is added to the software after the system has
been built. Several issues regarding a help facility must be addressed:
when it is available, how it is accessed, how it is represented to the user,
how it is structured, and what happens when help is exited.
An effective error message can do much to improve the quality of an
interactive system and will significantly reduce user frustration when
problems do occur. Every error message or warning produced by an
interactive system should have the following characteristics:
• The message should describe the problem in simple language that
a user can easily understand.
• The message should provide constructive advice for recovering
from the error.
Scrolling Menu
When a full choice list cannot be displayed within the menu area,
scrolling of the menu items is required. This enables the user to view
and select the menu items that cannot be accommodated on the screen.
Fig.5.1 Font size selecting using scrolling menu
Walking Menu
Walking menu is a very commonly used menu to structure a large
collection of menu items. In this technique, when a menu item is
selected, it causes further menu items to be displayed adjacent to it in a
sub-menu. A walking menu can be successfully used to structure
commands only if there are limited choices since each adjacently
displayed menu does take up screen space and the total screen area,
after all, is limited.
Fig.5.2 Examples of walking menu
Hierarchical Menu:
In this technique, the menu items are organized in a hierarchy or tree
structure. Selecting a menu item causes the current menu display to be
replaced by an appropriate sub-menu. Walking menu can be considered
to be a form of hierarchical menu. Hierarchical menu, on the other
hand, can be used to manage a large number of choices, but the users
are likely to face navigational problems and therefore lose track of their
whereabouts in the menu tree. This probably is the main reason why
this type of interface is very rarely used.
Direct Manipulation Interfaces
Direct manipulation interfaces present the interface to the user in the
form of visual models, i.e. icons. This type of interface is called an iconic
interface. In this type of interface, the user issues commands by
performing actions on the visual representations of the objects.
The advantages of iconic interfaces are that the icons can be recognised
by the users very easily and icons are language-independent.
• Main aspects of Graphical UI, Text based
Chapter -6
Understanding the Principles of Software Coding
Rules for limiting the use of global: These rules list what types of data
can be declared global and what cannot.
• The team performing the code walkthrough should not be either too
big or too small. Ideally, it should consist of three to seven
members.
• Discussions should focus on discovery of errors and not on how
to fix the discovered errors.
• Unit Testing
Stubs and drivers are designed to provide the complete environment that a
module requires for testing.
Global Data
Fig. 6.2 Unit testing with the help of driver and stub module
White-box testing uses the control structure of the design to derive test
cases. It is the most widely utilized unit testing technique, used to
determine all possible paths within a module, to execute all loops and to
test all logical expressions. This form of testing concentrates on
procedural detail.
The general outline of the white-box testing process is:
• Perform risk analysis to guide the entire testing process.
• Develop a detailed test plan that organizes the subsequent
testing process.
• Prepare the test environment for test execution.
Statement Coverage
This statement coverage strategy aims to design test cases so that every
statement in a program is executed at least once. The principal idea
governing the statement coverage strategy is that unless a statement is
executed, there is no way to determine whether an error exists in that
statement; unless a statement is executed, we cannot observe whether it
causes a failure due to some illegal memory access, wrong result
computation, etc.
Example:
Consider Euclid's GCD computation algorithm:

int compute_gcd(x, y)
int x, y;
{
1    while (x != y) {
2        if (x > y)
3            x = x - y;
4        else y = y - x;
5    }
6    return x;
}
Design of test cases for the above program segment:

Test case       Statements executed
x = 5, y = 5    1, 5, 6
x = 5, y = 4    1, 2, 3, 5, 6
x = 4, y = 5    1, 2, 4, 5, 6

So the test set for the above algorithm will be
{(x=5, y=5), (x=5, y=4), (x=4, y=5)}.
Branch Coverage
In the branch coverage-based testing strategy, test cases are designed to
make each branch condition assume true and false values in turn. Branch
testing is also known as edge testing, and is stronger than the statement
coverage testing approach.
Condition Coverage
In this structural testing, test cases are designed to make each
component of a composite conditional expression assume both true and
false values. For example, in the conditional expression ((C1 AND C2)
OR C3), the components C1, C2 and C3 are each made to assume both
true and false values. Condition testing is a stronger testing strategy
than branch testing, and branch testing is a stronger testing strategy than
statement coverage-based testing.
Path Coverage
The path coverage-based testing strategy requires designing test cases
such that all linearly independent paths in the program are executed at
least once. A linearly independent path can be defined in terms of the
control flow graph (CFG) of a program.
Control Flow Graph (CFG)
A control flow graph describes the sequence in which the different
instructions of a program get executed. It is a directed graph in which
the nodes are either entire statements or fragments of a statement, and
the edges represent the flow of control. An edge from one node to
another exists if the execution of the statement representing the first
node can result in the transfer of control to the second node.
A flow graph can easily be generated from the code of any program.
Path
A path through a program is a node and edge sequence from the starting
node to a terminal node of the control flow graph of the program. A
program can have more than one terminal node when it contains
multiple exit or return statements.
Example:
For the CFG of the GCD program given earlier:
Number of edges, E = 7
Number of nodes, N = 6
The value of the cyclomatic complexity is
V(G) = E − N + 2 = 7 − 6 + 2 = 3
Data Flow-Based Testing
The data flow-based testing method selects the test paths of a program
according to the locations of the definitions and uses of the different
variables in the program.
Consider a program P. For a statement numbered S of P, let
DEF(S) = {X | statement S contains a definition of X}, and
USES(S) = {X | statement S contains a use of X}.
• Debugging Approaches
Performance Testing
Performance testing is carried out to check whether the system meets
the non-functional requirements identified in the SRS document. The
types of performance testing to be carried out on a system depend on
the different non-functional requirements documented in the SRS. All
performance tests can be considered black-box tests.
Stress Testing
Stress testing is also known as endurance testing. It evaluates system
performance when the system is stressed for short periods of time.
Stress tests are black-box tests designed to impose a range of abnormal
and even illegal input conditions so as to stress the capabilities of the
software. Input data volume, input data rate, processing time and
memory utilization are tested beyond the designed capacity.
Stress testing is especially important for systems that usually operate
below the maximum capacity but are severely stressed at some peak
demand hours. Example : If the nonfunctional requirement
specification states that the response time should not be more than 20
seconds per transaction when 60 concurrent users are working, then
during the stress testing the response time is checked with 60 users
working simultaneously.
Volume Testing
Volume testing checks whether the data structures (buffers, arrays,
queues, stacks etc.) have been designed to successfully handle
extraordinary situations.
Example : A compiler might be tested to check whether the symbol table
overflows when a very large program is compiled.
Configuration Testing
Configuration testing is used to test system behavior in the various
hardware and software configurations specified in the requirements.
Compatibility Testing
Compatibility testing is required when the system interfaces with
external systems; it checks whether those interfaces function as
specified.
Error Seeding
If S errors are seeded into a program, and testing then detects s of the
seeded errors and n of the original (unseeded) errors, the total number
of latent errors N can be estimated as
N = S × n / s
and the number of errors remaining after testing is
N − n = n(S − s) / s
When a hardware failure occurs, one has to either replace or repair the
failed part, and the system is restored to the reliability level that
existed before the failure occurred. A software product, in contrast,
would continue to fail until the error is tracked down and either the
design or the code is changed. For this reason, hardware reliability and
software reliability follow quite different patterns over time.
There are three phases in the life of any hardware component: burn-in,
useful life and wear-out.
In the burn-in phase, the failure rate is quite high initially, and it
starts decreasing as the faulty components are identified and removed.
The system then enters its useful life.
During the useful-life period, the failure rate is approximately
constant. The failure rate increases in the wear-out phase due to the
wearing out of components. The best period is the useful-life period.
The shape of this failure-rate curve resembles a bathtub, and it is
known as the bathtub curve.
For software, the failure rate is highest during the integration and
testing phases. During the testing phase, more and more errors are
identified and removed, resulting in a reduced failure rate. This error
removal continues at a slower pace during the useful life of the
product. As the software becomes obsolete, no more error correction
occurs and the failure rate remains unchanged.
• Distinguish between the Different Reliability Metrics
This model allows for negative reliability growth, reflecting the fact
that when a repair is carried out, it may introduce additional errors. It
also models the fact that, as errors are repaired, the average
improvement in reliability per repair decreases. It treats an error's
contribution to reliability improvement as an independent random
variable.
Quality systems have rapidly evolved over the last five decades. The
quality systems of organisations have passed through four stages of
evolution: inspection, quality control, quality assurance and total
quality management (TQM).
A primary objective of deploying CASE tools is to increase
productivity. CASE environments are commonly classified as:
• Toolkits
• Language-centered
• Integrated
• Fourth generation
• Process-centered
Since different tools covering different stages of the life cycle share
common information, it is required that they integrate through some
central repository so as to maintain a consistent view of the
information associated with the software.
Benefits of CASE
• A key benefit arising from the use of a CASE environment is cost
saving through all development phases.
• Use of CASE tools leads to considerable improvements in quality.
• CASE tools help produce high-quality and consistent documents:
since the important data relating to a software product are
maintained in a central repository, redundancy in the stored data
is reduced, and therefore the chances of inconsistent documentation
are reduced to a great extent.
• CASE tools reduce drudgery in a software engineer's work.
• CASE tools have led to revolutionary cost savings in software
maintenance efforts.
• Use of a CASE environment has an impact on the style of working
of a company, and makes it conscious of a structured and orderly
approach.
The architecture of a CASE environment consists of the following layers:
• Hardware platform
• Operating system: database and object management services.
• Portability services: allow CASE tools and their integration
framework to migrate across different operating systems and
hardware platforms without significant adaptive maintenance.
• Integration framework: a collection of specialized programs that
allow CASE tools to communicate with one another.
• CASE tools: a CASE tool can be used quite effectively, even if it
is a point solution.
• Prototyping Support
• Code Generation
As far as code generation is concerned, the general expectation
from a CASE tool is quite low. The pragmatic support expected from a
CASE tool during the code generation phase is:
• The CASE tool should support generation of module skeletons or
templates in one or more popular programming languages. It should
be possible to include a copyright message, a brief description of
the module, etc. as comments.
The CASE tool for test case generation should have the following
features:
• It should support both design and requirement testing.
• It should generate test set reports in ASCII format which can be
directly imported into the test plan document.
• List the Different CASE Tools
• Business process engineering tools: represent business data objects,
their relationships, and the flow of the data objects between
company business areas.
• Data acquisition tools: get data for testing.
• Static measurement tools
Reference Books
• Fundamentals of Software Engineering, Rajib Mall, Prentice Hall
of India
• Software Engineering: A Practitioner's Approach, Roger S.
Pressman, McGraw-Hill International Edition