BCA - 303
SOFTWARE ENGINEERING
This SIM has been prepared exclusively under the guidance of Punjab Technical
University (PTU) and reviewed by experts and approved by the concerned statutory
Board of Studies (BOS). It conforms to the syllabi and contents as approved by the
BOS of PTU.
Reviewer
All rights reserved. No part of this publication which is material protected by this copyright notice
may be reproduced or transmitted or utilized or stored in any form or by any means now known or
hereinafter invented, electronic, digital or mechanical, including photocopying, scanning, recording
or by any information storage or retrieval system, without prior written permission from the Publisher.
Information contained in this book has been published by VIKAS® Publishing House Pvt. Ltd. and has
been obtained by its Authors from sources believed to be reliable and are correct to the best of their
knowledge. However, the Publisher and its Authors shall in no event be liable for any errors, omissions
or damages arising out of use of this information and specifically disclaim any implied warranties of
merchantability or fitness for any particular use.
Section-I
Software: Characteristics, Components, Applications; Software Process Models: Waterfall, Spiral, Prototyping, Fourth Generation Techniques; Concepts of Project Management; Role of Metrics and Measurement.
(Unit 1: Software Process and Process Models, Pages 3-31)
Section-II
S/W Project Planning: Objectives; Decomposition Techniques: S/W Sizing, Problem Based Estimation, Process Based Estimation; Cost Estimation Models: COCOMO Model, The S/W Equation; System Analysis: Principles of Structured Analysis, Requirement Analysis, DFD, Entity Relationship Diagram, Data Dictionary.
(Unit 2: Software Project Planning & Cost Estimation, Pages 33-63; Unit 3: Systems Analysis, Pages 65-111)
Section-III
S/W Design: Objectives, Principles, Concepts; Design Methodologies: Data Design, Architecture Design, Procedural Design, Object-Oriented Concepts; Testing Fundamentals: Objectives, Principles, Testability; Test Cases: White Box & Black Box Testing; Testing Strategies: Verification & Validation, Unit Test, Integration Testing, Validation Testing, System Testing.
(Unit 4: Software Design, Pages 113-155; Unit 5: Software Testing, Pages 157-206)
CONTENTS
INTRODUCTION 1
UNIT 1 SOFTWARE PROCESS AND PROCESS MODELS
1.0 INTRODUCTION
Software systems have become ubiquitous. They are now present in virtually all electronic and
electrical equipment. Be it electronic gizmos and gadgets, traffic lights or medical equipment,
almost all electrical equipment is run by software. Software is an intangible entity that embodies
the instructions and programs which drive the actual functioning of a computer system.
In the early days of computers, computer memory was small, programming was done in binary
and machine code, and programmers developed code that could be used in developing more than
one software system. The software developed at that time was simple in nature and did not
involve much creativity from the developers' end. However, as technology improved, there was a
need to build bigger and more complex software systems in order to meet the users' changing
and growing requirements. This led to the emergence of software engineering, which included
the development of software processes and various process models. A software process helps in
developing a timely, high-quality, and highly efficient product or system. It consists of the
activities, constraints, and resources that are used to produce an intended system. A software
process also helps to maintain a level of consistency and quality in products or services that are
produced by many different people.
In this unit, we focus on what software is, how software engineering evolved, why
process models are used, and why software metrics and measurement are used.
1.1 UNIT OBJECTIVES
After reading this unit, the reader will understand:
• History of software development.
• Software characteristics and classification of software.
• Various software myths, such as management myths, user myths, and developer myths.
• Software crisis, which has been used since the early days of software engineering to
describe the impact of the rapid increases in computer power and its complexity.
• What is software engineering?
• The role of software engineer.
• Phased development of software, which is often referred to as software development
life cycle.
• What is software process, project, and product?
• The major components of software process, which help in developing a product that
accomplishes user requirements.
• How process framework determines the processes that are essential for completing a
complex software project.
• The need for process assessment to ensure that it meets a set of basic process criteria.
• Various process models, which comprise the processes, methods and steps for developing
software.
• The role of software metrics and measurement.
1.2 SOFTWARE
Software can be defined as a collection of programs, documentation and operating
procedures. The Institute of Electrical and Electronics Engineers (IEEE) defines software as “a
collection of computer programs, procedures, rules, and associated documentation and
data”. Software possesses no mass, no volume, and no colour, which makes it a non-
degradable entity over a long period. Software does not wear out or get tired. According to
the definition of IEEE, software is not just programs, but includes all the associated
documentation and data.
Software is responsible for managing, controlling, and integrating the hardware components
of a computer system to accomplish any given specific task. Software instructs the
computer about what to do and how to do it. For example, software instructs the hardware
how to print a document, take input from the user, and display the output.
Computers need instructions to carry out the intended task. These instructions are given in
the form of computer programs. Computer programs are written in computer programming
languages, such as C, C++, and so on. A set of programs that is specifically written to
provide users a precise functionality, such as solving a specific problem, is termed a software
package. For example, an accounting software package helps users in performing accounting
related activities.
• 1960s: Software was developed for specific areas and was being marketed and sold separately from hardware. This marked a deviation from the earlier practice of giving software away free as part of the hardware platform. In addition, hiding the internal details of an operating system behind abstract programming interfaces improved the productivity of the programmer.
• 1970s: With the development of structured design, software development models were introduced. These were based on a more organic, evolutionary approach, deviating from the waterfall-based methodologies of hardware engineering. Research was done on quantitative techniques for software design. During this time, researchers began to focus on software design to address the problems of developing complex software systems.
• 1980s: Software engineering research shifted focus toward integrating designs and design processes into the larger context of the software development process and its management. In the latter half of the 1980s, a new design paradigm known as object-oriented modelling was introduced. Software engineers using object-oriented techniques were able to model both the problem domain and the solution domain within the programming languages.
• 1990s: Object orientation was augmented with design techniques, such as class/responsibilities/collaborators (CRC) cards and use case analysis. Methods and modelling notations from structured design made their way into the object-oriented modelling methods. These included diagramming techniques, such as state transition diagrams and processing models.
• Presently: A multi-view approach to design is used to manage the complexity of designing and developing large-scale software systems. This multi-view approach has resulted in the development of the unified modelling language (UML), which integrates modelling concepts and notations from many methodologies.
[Figure: Software characteristics, including reliability, portability and maintainability.]
• Functionality: Refers to the degree of performance of the software against its intended
purpose.

• It should provide some pre-defined interfaces, and all interactions must take place through
these interfaces.
• It should have complete documentation so that the users of the component can decide
whether or not the component meets their needs.
• It has to conform to some specified standards.
• It should be language-independent.
(a) Stand-alone
(b) Embedded
(c) Real-time
(d) Network
[Figure: Elements involved in software engineering: programming skills, problem solving skills, design approaches, a model of the application, project management and software technologies.]
Problem-solving step | Bridge construction analogy | Software development phase
Formulate the problem by understanding its nature and general requirements. | Understand the load the bridge must carry, the approximate locations where it can be built, the height requirements, and so on. | Preliminary investigation.
Defining the problem precisely. | Specify the site for the bridge, its size, and a general outline of the type of bridge to be built. | Software requirement analysis and specifications.
Detailing the solution to the problem. | Determine the exact configuration, the size of the cables and beams, and develop blueprints for the bridge. | Software design.
Implementing. | Corresponds to the actual building of the bridge. | Software coding.
Characteristic | Description
Understandability | The extent to which the process is explicitly defined and the ease with which the process definition is understood.
Visibility | Whether the process activities culminate in clear results, so that the progress of the process is visible externally.
Supportability | The extent to which CASE tools can support the process activities.
Acceptability | The extent to which the defined process is acceptable to and usable by the engineers responsible for producing the software product.
Reliability | The manner in which the process is designed so that errors in the process are avoided or trapped before they result in errors in the product.
Robustness | Whether the process can continue in spite of unexpected problems.
Maintainability | Whether the process can evolve to reflect changing organizational requirements or identified process improvements.
Rapidity | The speed with which the complete software can be delivered with given specifications.
Check Your Progress
8. Define software development life cycle.
9. Explain different types of constraints that are analyzed during preliminary investigation.

A project is defined as a specification essential for developing or maintaining a specific
product. A software project is developed when software processes or activities are executed
for certain specific requirements of the user. Thus, using a software process, a software
project can be easily developed. The activities in a software project comprise various
tasks for managing resources and developing the product. Figure 1.6 shows that a software
project involves people (developers, project manager, end users, and so on), also referred to
as participants, who use software processes to produce a product according to user
requirements. The participants play a major role in the development of the project and they
select the appropriate process for the project. In addition, a project is efficient if it is
developed within the time constraint. The outcome or result of a software project is
known as the product. Thus, a software project uses software processes to produce a product.
Software process can consist of many software projects and each of them can produce
one or more software products. The interrelationship between these three entities (process,
project, and product) is shown in Figure 1.7. A software project begins with requirements
and ends with the accomplishment of requirements. Thus, software process should be
performed to develop final software by accomplishing the user requirements. Note that
software processes are not specific to the software project.
• Project management process: Provides the means to plan, organise and control the allocated
resources to accomplish project cost, time and performance objectives. For this, various
processes, techniques and tools are used to achieve the objectives of the project. Project
management performs the activities of this process. Also, the project management process
is concerned with the set of activities or tasks which are used to successfully accomplish
a set of goals.
• Configuration control process: Manages changes that occur as a result of modifying
the requirements. In addition, it maintains integrity of the products with the changed
requirements. The activities in configuration control process are performed by a group
called configuration control board (CCB).
[Figure: Components of the software process: the process management process and the product engineering process; the latter comprises the development process, the project management process and the configuration control process.]
Note that project management process and configuration control process depend on the
development process. The management process aims to control the development process,
depending on the activities in the development process.
[Figure: Software process framework, comprising framework activities (1 to n) and umbrella activities.]
Verification is the process of evaluating a system or its components to determine whether the
product developed in each phase of software development satisfies the conditions imposed in the
previous phase. IEEE defines verification as “a process for determining whether the software
products of an activity fulfil the requirements or conditions imposed on them in the previous
activities.” Thus, it confirms that the product is transformed from one form to another as intended
and with sufficient accuracy.
Validation is the process of evaluating the product at the end of each phase to ensure
compliance with the requirements. In addition, it is the process of establishing a procedure
and a method, which performs according to the intended outputs. IEEE defines validation
as “a process for determining whether the requirements and the final, as-built system or
software product fulfils its specific intended use.” Thus, validation substantiates the software
functions with sufficient accuracy with respect to its requirement specifications.
Various kinds of process models used are waterfall model, prototyping model, spiral model,
and fourth generation techniques.
Check Your Progress
10. What is software project management?
11. What is the aim of process management processes (PMP)?
12. Define process framework.

1.6.1 Waterfall Model

In the waterfall model (also known as the classical life cycle model), the development of software
proceeds linearly and sequentially from requirement analysis to design, coding, testing,
integration, implementation, and maintenance. Thus, this model is also known as the linear
sequential model.
This model is simple to understand and represents processes which are easy to manage
and measure. The waterfall model comprises different phases and each phase has its distinct
goal. Figure 1.11 shows that once a phase is completed, the development of software
proceeds to the next phase. Each phase modifies the intermediate product to develop a new
product as an output. The new product becomes the input of the next process, as listed in
Table 1.4.
[Figure 1.11: Waterfall model, showing the phases system/information engineering and modelling, requirements analysis, design, coding, testing, integration, implementation and delivery, and maintenance; the product output of each phase becomes the product input of the next, and changed requirements are handled through maintenance iterations.]
As stated earlier, the waterfall model comprises several phases. These phases are listed
below:
• System/information engineering and modelling: Establishes the requirements for the
complete computer-based system, of which the software is a part. A subset of these
requirements is allocated to the software. The system view is essential when the software
interacts with hardware. System engineering includes collecting requirements at the system
level, while information gathering is necessary when requirements are collected at a level
where decisions regarding business strategies are taken.
• Requirement analysis: Focuses on the requirements of the software to be developed. It
determines the processes that are incorporated during the development of the software.
To specify the requirements, the user's specifications should be clearly understood and
the requirements should be analysed. This phase involves interaction between the user
and the software engineer, and produces a document known as the software requirement
specification (SRS).
• Design: Determines the detailed process of developing the software after the requirements
are analysed. It utilises the software requirements defined by the user and translates them
into a software representation. In this phase, the emphasis is on finding a solution to
the problems defined in the requirement analysis phase. The software engineer is mainly
concerned with the data structures, algorithmic detail, and interface representations.
• Coding: Emphasises translation of the design into a programming language using the
coding style and guidelines. The programs created should be easy to read and understand.
All the programs written are documented according to the specification.
• Testing: Ensures that the product is developed according to the requirements of the
user. Testing is performed to verify that the product functions efficiently with
minimum errors. It focuses on the internal logic and external functions of the software
and ensures that all the statements have been exercised (tested). Note that testing is a
multi-stage activity, which emphasises verification and validation of the product.
• Implementation and maintenance: Delivers fully functioning operational software
to the user. Once the software is accepted and deployed at the user's end, various
changes occur due to changes in the external environment (such as upgrading to a new
operating system or adding a new peripheral device). Changes also occur due to the
changing requirements of the user and changes in the field of technology. This phase
focuses on modifying the software, correcting errors, and improving its performance.
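Since the product output of each phase becomes the product input of the next, the waterfall model can be pictured as a linear pipeline. The following sketch is purely illustrative; the phase functions and the data they exchange are hypothetical and not taken from this unit.

# Illustrative sketch: waterfall phases as a linear pipeline in which the
# output of one phase is the input of the next. All names and data are hypothetical.

def requirements_analysis(user_needs):
    # produces the software requirement specification (SRS)
    return {"srs": f"SRS derived from: {user_needs}"}

def design(product):
    # translates the SRS into a software representation
    return {"design_doc": f"Design based on {product['srs']}"}

def coding(product):
    # translates the design into programs
    return {"programs": f"Code implementing {product['design_doc']}"}

def testing(product):
    # verifies that the product functions as required
    return {"tested_build": f"Tested build of {product['programs']}"}

def run_waterfall(user_needs):
    product = requirements_analysis(user_needs)
    for phase in (design, coding, testing):
        product = phase(product)  # each phase consumes the previous phase's output
    return product

print(run_waterfall("online admission system"))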
The various advantages and disadvantages associated with waterfall model are listed in
Table 1.5.
Advantages Disadvantages
[Figure: Prototyping model: requirements gathering and analysis, quick design, build prototype, user evaluation and refining the prototype, followed by engineering the product.]
Advantages:
• Provides a working model to the user early in the process, enabling early assessment and increasing user confidence.
• The developer gains experience and insight by developing a prototype, thereby resulting in better implementation of requirements.
• The prototyping model serves to clarify requirements which are not clear, hence reducing ambiguity and improving communication between developer and user.
• There is great involvement of users in software development. Hence, the requirements of the users are met to the greatest extent.
• Helps in reducing risks associated with the project.

Disadvantages:
• If the user is not satisfied with the developed prototype, a new prototype is developed. This process goes on until a perfect prototype is developed. Thus, this model is time consuming and expensive.
• The developer may lose focus of the real purpose of the prototype and compromise on the quality of the product. For example, inefficient algorithms or inappropriate programming languages used in developing the prototype may be carried over.
• Prototyping can lead to false expectations. It often creates a situation where the user believes that the development of the system is finished when it is not.
• The primary goal of prototyping is rapid development; thus, the design of the system can suffer as it is built in a series of layers without considering integration of all the other components.
[Figure: Spiral model: each cycle passes through four quadrants: determining objectives, alternatives and constraints; evaluating alternatives and identifying and resolving risks (using prototypes, simulations, models and benchmarks); developing and verifying the next-level product (concept of operation, software requirements and requirements validation, product design, detailed design, code, unit test, integration and test, acceptance test, implementation); and planning the next phases (requirements plan, life-cycle plan, development plan, integration and test plan). The radial dimension represents cumulative cost and the angular dimension represents progress through the steps.]
[Figure content: the waterfall phases (design, detailed design, coding, unit testing, integration and test, acceptance test, implementation and delivery, maintenance) shown alongside successive cycles of the spiral.]
Figure 1.14 Spiral and Waterfall Model
The spiral model is similar to the waterfall model, as software requirements are understood
at the early stages in both models. However, the major risks involved in developing
the final software are resolved in the spiral model. When these issues are resolved, a detailed
design of the software is developed. Note that the processes of the waterfall model are followed
by different cycles in the spiral model, as shown in Figure 1.14.
The various advantages and disadvantages associated with spiral model are listed in
Table 1.7.
The spiral model is also similar to the prototyping model, as one of the key features of
prototyping is to develop a prototype until the user requirements are accomplished. The
second step of the spiral model functions similarly: a prototype is developed to clearly
understand and achieve the user requirements. If the user is not satisfied with the prototype, a
new prototype, known as the operational prototype, is developed.
Advantages:
• The risk-driven approach helps avoid many problems in the software.
• Specifies a mechanism for software quality assurance activities.
• The spiral model is suitable for complex and dynamic projects.
• Re-evaluation after each step allows changes in user perspectives, technology advances or financial perspectives.
• Estimation of budget and schedule becomes more realistic as the work progresses.

Disadvantages:
• Assessment of project risks and their resolution is not an easy task.
• It is difficult to estimate budget and schedule in the beginning, as some of the analysis is not done until the design of the software is developed.
Table 1.8 Advantages and Disadvantages of Fourth Generation Techniques
Advantages:
• Development time is reduced when used for small and intermediate applications.
• The interaction between user and developer helps in detection of errors.
• When integrated with CASE tools and code generators, fourth generation techniques provide a solution to most of the software engineering problems.

Disadvantages:
• Difficult to use.
• Limited only to small business information systems.
1.7.2 Software Metrics
Once measures are collected, they are converted into metrics for use. IEEE defines a metric
as “a quantitative measure of the degree to which a system, component, or process possesses
a given attribute.” The goal of software metrics is to identify and control the essential parameters
that affect software development. The other objectives of using software metrics are listed
below:
• Measure the size of the software quantitatively.
• Assess the level of complexity involved.
• Assess the strength of the module by measuring coupling.
• Assess the testing techniques.
• Specify when to stop testing.
• Determine the date of release of the software.
• Estimate cost of resources and project schedule.
Note that to achieve these objectives, software metrics are applied to different projects for
a long period of time to obtain indicators. Software metrics help project managers to gain
an insight into the efficacy of the software process, project, and product. This is possible
by collecting quality and productivity data and then analysing and comparing these data
with past averages in order to know whether quality improvements have occurred or not.
Also, when metrics are applied in a consistent manner, they help in project planning and
project management activities. For example, schedule-based resource allocation can be
effectively enhanced with the help of metrics.
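As a small illustration of how such quality and productivity data might be compared with past averages, the sketch below computes two commonly used metrics, productivity (LOC per person-month) and defect density (defects per KLOC). The choice of metrics and all figures are assumed for illustration and are not prescribed by this unit.

# Minimal sketch: comparing a hypothetical project's productivity and defect
# density against past averages. All numbers are made up for illustration.

def productivity(loc, person_months):
    """Lines of code delivered per person-month."""
    return loc / person_months

def defect_density(defects, loc):
    """Defects per thousand lines of code (KLOC)."""
    return defects / (loc / 1000)

current = {"loc": 45000, "effort_pm": 174, "defects": 180}
past_average = {"productivity": 240.0, "defect_density": 4.5}

prod = productivity(current["loc"], current["effort_pm"])
density = defect_density(current["defects"], current["loc"])

print(f"Productivity:   {prod:.1f} LOC/PM (past average {past_average['productivity']})")
print(f"Defect density: {density:.2f} defects/KLOC (past average {past_average['defect_density']})")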
UNIT 2 SOFTWARE PROJECT PLANNING AND COST ESTIMATION
2.0 INTRODUCTION
Software development is a complex activity involving people, processes and procedures.
Therefore, an effective management of software project is essential for its success. Software
project management (responsible for project planning) specifies activities necessary to
complete the project. The activities include determining project constraints, checking project
feasibility, defining role and responsibilities of the persons involved in the project, and
much more. One of the crucial aspects of project planning is the estimation of cost, which
includes the work to be done, and the resources and time required to develop the project. A careful
and accurate estimation of cost is important, as a cost overrun may agitate the
customers and lead to cancellation of the project, while a cost underestimate may force the
software team to invest its time without much monetary consideration.
Cost estimation should be done before software development is initiated since it helps the
project manager to know about resources required and the feasibility of the project. Also,
the initial estimate may be used to establish budget for the project or to set a price for the
software to the potential customer. However, estimation must be done repeatedly throughout
the development process, as more information about the project is available in the later
stages of development. This helps in effective usage of resources and time. For example, if
actual expenditure is greater than the estimate, then the project manager may apply additional
resources for the project or modify the work to be carried out.
Senior management:
• Approves the project, employs personnel, and provides the resources required for the project.
• Reviews the project plan to ensure that it accomplishes the business objectives.
• Resolves conflicts among team members.
• Considers risks that may affect the project so that appropriate measures can be taken to avoid them.

Project management team:
• Reviews the project plan and implements procedures for completing the project.
• Manages all project activities.
• Prepares budget and resource allocation plans.
• Helps in resource distribution, project management, issue resolution, and so on.
• Understands the project objectives and finds ways to accomplish them.
• Devotes appropriate time and effort to achieve the expected results.
• Selects methods and tools for the project.
Note: In a software project, the terms 'task' and 'activity' both refer to the work performed during software development.
Hence, both terms are used interchangeably throughout this unit.
Project planning comprises the project purpose, project scope, project planning process, and
project plan. This information is essential for effective project planning and assists the project
management team in accomplishing the user requirements.
[Figure 2.1: Project planning process: identification of project requirements, cost estimates, risks and critical success factors; preparation of the project charter and project plan; and commencement of the software project.]
Figure 2.1 shows several activities of project planning, which can be performed both in a
sequence and in a parallel manner. Project planning process consists of various activities
listed below:
• Identification of project requirements: Before starting a project, it is essential to
identify the project requirements, as this helps in performing the activities in a
systematic manner. These requirements comprise information such as the project scope,
the data and functionality required in the software, and the roles of the project
management team members.
• Identification of cost estimates: Along with the estimation of effort and time, it is
necessary to estimate the cost that is to be incurred on a project. The cost estimation
includes the cost of hardware, network connections, and the cost required for the
maintenance of hardware components. In addition, cost is estimated for the individuals
involved in the project.
• Identification of risks: Risks are unexpected events that have an adverse effect on the
project. A software project involves several risks (like technical risks and business risks)
that affect the project schedule and increase the cost of the project. Identifying risks
before a project begins helps in understanding their probable extent of impact on the
project.
• Identification of critical success factors: For making a project successful, critical
success factors are followed. Critical success factors refer to the conditions that ensure
greater chances of success of a project. Generally, these factors include support from
management, an appropriate budget, an appropriate schedule, and skilled software engineers.
• Preparation of project charter: A project charter provides a brief description of the
project scope, quality, time, cost, and resource constraints as described during project
planning. It is prepared by the management for approval from the sponsor of the project.
• Preparation of project plan: A project plan provides information about the resources
that are available for the project, individuals involved in the project, and the schedule
according to which the project is to be carried out.
• Commencement of the project: Once the project planning is complete and resources
are assigned to team members, the software project commences.
Figure 2.1 shows the process of project planning. Once the project objectives and business
objectives are determined, the project end date is fixed. Project management team prepares
the project plan and schedule according to the end date of the project. After analysing the
project plan, the project manager communicates the project plan and end date to the senior
management. The progress of the project is reported to the management from time to time.
Similarly, when the project is complete, senior management is informed about it. In case,
there is delay in completing the project, the project plan is re-analyzed and corrective
actions are taken to complete the project. The project is tracked regularly and when the
project plan is modified, the senior management is informed.
Figure 2.2 shows the verification and validation plan, which comprises of various sections
listed below:
• General information: Provides description of the purpose, scope, system overview,
and project references. Purpose describes the procedure to verify and validate the
components of the system. Scope provides information about the procedures to verify
and validate as they relate to the project. System overview provides information about
the organization responsible for the project and other information, such as system name,
system category, operational status of the system, and system environment. Project
references provide the list of references used for the preparation of the verification and
validation plan. In addition, this section includes acronyms and abbreviations and points
of contact. Acronyms and abbreviations provide a list of terms used in the document.
Points of contact provide information to users when they require assistance from the
organization for problems such as troubleshooting.
• Reviews and walkthroughs: Provide information about the schedule and procedures.
The schedule describes the end dates of the milestones of the project. The procedures describe the
tasks associated with reviews and walkthroughs. Each team member reviews the
document for errors and consistency with the project requirements. For walkthroughs,
the project management team checks the project for correctness according to the software
requirements specification (SRS).
• System test plan and procedures: Provide information about the system test strategy,
database integration, and platform system integration. System test strategy provides
overview of the components required for integration of the database and ensures that
the application runs on at least two specific platforms. Database integration procedure
describes how database is connected to the graphical user interface (GUI). Platform
system integration procedure is performed on different operating systems to test the
platform.
• Acceptance test and preparation for delivery: Provide information about procedure,
acceptance criteria, and installation procedure. Procedure describes how acceptance
testing is to be performed on the software to verify its usability as required. The acceptance
criteria state that the software will be accepted only if all the components, features,
and functions are tested, including system integration testing. In addition, the acceptance
criteria check whether the software accomplishes user expectations, such as its ability
to operate on several platforms. Installation procedure describes the steps on how to
install the software according to the operating system being used for it.
(c) Configuration Management: The configuration management plan defines the process,
which is used for making changes to the project scope. Generally, configuration management
plan is concerned with redefining the existing objectives of the project and deliverables
(software products that are delivered to the user after completion of a software development
phase).
(d) Maintenance: The maintenance plan specifies the resources and processes required
for making the software operational after its installation. Sometimes, the project management
team (or software development team) does not carry out the task of maintenance once the
software is delivered to the user. In such a case, a separate team known as software
maintenance team performs the task of software maintenance. Before carrying out
maintenance, it is necessary for users to have information about the process required for
using the software efficiently.
Figure 2.3 shows the maintenance plan, which comprises the various sections listed below:
[Figure 2.4: Staffing plan outline]
1.0 General Information
1.1 Project Name
1.2 Project Manager
1.3 Project Start Date
1.4 Project End Date
2.0 Skills Assessment
3.0 Staffing Profile
3.1 Calendar Time
3.2 Individuals Involved
3.3 Level of Commitment
4.0 Organization Chart
• Defines the roles and responsibilities of the project management team members so that they
can communicate and coordinate with each other according to the tasks assigned to
them. Note that the project management team can be further broken down into sub-teams
depending on the size and complexity of the project.
Figure 2.4 shows the staffing plan, which comprises the various sections listed below:
• General information: Provides information, such as name of the project and project
manager who is responsible for the project. In addition, it specifies the start and end
dates of the project.
• Skills assessment: Provides information, which is required for assessment of skills.
This information includes the knowledge, skill, and ability of team members, who are
required to achieve the objectives of the project. In addition, it specifies the number of
team members required for the project.
• Staffing profile: Describes the profile of the staff required for the project. The profile
includes calendar time, individuals involved, and level of commitment. Calendar time
specifies the period of time, such as month or quarter required to complete the project.
Individuals that are involved in the project have specific designations, such as project
manager and the developer. Level of commitment is the utilisation rate of individuals,
such as work performed on full time and part time basis.
• Organization chart: Describes the organization of project management team members.
In addition, it includes information, such as name, designation, and role of each team
member.
[Figure 2.5: Accuracy of cost estimates over the software life cycle; the possible range of an estimate narrows (for example from 4x down to 0.25x of the final value) as more information about the project becomes available.]
In Figure 2.5, the funnel shaped lines narrowing at the right hand side show how cost
estimates get more accurate as additional software information is available. For example,
cost estimated during system design phase is more accurate than cost estimated during
requirement phase. Similarly, cost estimated during coding and testing phase is more accurate
than it is at the design phase. Note that when all the information about the project is not known, the
initial estimate may differ from the final estimate by a factor of four.
Note: Cost estimation should be done more diligently throughout the life cycle of the
project so that unforeseen delays and risks can be avoided in the future.
[Figure: Resources for a software project: human resources (their number, skills and location), reusable software components (off-the-shelf components, full-experience components, partial-experience components and new components) and the development environment (hardware, software tools and network resources).]
[Figure: Steps of the cost estimation process: identify project objectives and requirements, plan activities, estimate size, estimate cost and effort, estimate schedule, assess risks, inspect and approve, track estimates, and measure and improve the process.]
(a) Project Objectives and Requirements: In this phase, the objectives and requirements
for the project are identified, which is necessary to estimate cost accurately and accomplish
user requirements. The project objective defines the end product, intermediate steps involved
in delivering the end product, end date of the project, and individuals involved in the project.
This phase also defines the constraints/limitations that affect the project in meeting its
objectives. Constraints may arise due to the factors listed below:
• Start date and completion date of the project.
• Availability and use of appropriate resources.
• Policies and procedures that require explanations regarding their implementation.
Project cost can be accurately estimated once all the requirements are known. However, if
all requirements are not known, then the cost estimate is based only on the known
requirements. For example, if software is developed according to the incremental development
model, then the cost estimation is based on the requirements that have been defined for that
increment.
(b) Plan Activities: A software development project involves different sets of activities, which
help in developing software according to the user requirements. These activities are
performed in the fields of software maintenance, software project management, software quality
assurance, and software configuration management. These activities are arranged in the
work breakdown structure according to their importance.
Work breakdown structure (WBS) is the process of dividing the project into tasks and
ordering them according to the specified sequence. WBS specifies only the tasks that are
performed and not the process by which these tasks are to be completed. This is because
WBS is based on requirements and not the manner in which these tasks are carried out.
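A work breakdown structure can be represented simply as a tree of tasks. The sketch below is a hypothetical example (the task names and effort figures are assumptions, not taken from this unit); it also shows how leaf-level estimates can be rolled up.

# Illustrative sketch: a WBS as a nested dictionary of tasks. A WBS lists what is
# to be done, not how; names and effort figures (person-months) are hypothetical.

wbs = {
    "Requirements analysis": {"Elicit requirements": 2, "Write SRS": 1},
    "Design": {"Architectural design": 2, "Detailed design": 3},
    "Coding": {"Implement modules": 6, "Code reviews": 1},
    "Testing": {"Unit testing": 2, "Integration testing": 2},
}

def total_effort(node):
    """Sum the estimated effort over all leaf tasks of the WBS."""
    if isinstance(node, dict):
        return sum(total_effort(child) for child in node.values())
    return node  # leaf task: effort in person-months

print(f"Total estimated effort: {total_effort(wbs)} person-months")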
(c) Estimating Size: Once the WBS is established, product size is calculated by estimating
the size of its components. Estimating product size is an important step in cost estimation
as most of the cost estimation models usually consider size as the major input factor. Also,
project managers consider product size as a major technical performance indicator or
productivity indicator, which allows them to track a project during software development.
(d) Estimating Cost and Effort: Once the size of the project is known, cost is calculated
by estimating effort, which is expressed in terms of person-month (PM). Various models
(like COCOMO, COCOMO II, expert judgement, top-down, bottom-up, estimation by
analogy, Parkinson's principle, and price to win) are used to estimate effort. Note that for
cost estimation, more than one model is used, so that cost estimated by one model can be
verified by another model.
(e) Estimating Schedule : Schedule determines the start date and end date of the project.
Schedule estimate is developed either manually or with the help of automated tools. To
develop a schedule estimate manually, a number of steps are followed, which are listed
below:
1. The work breakdown structure is expanded, so that the order in which functional
elements are developed can be determined. This order helps in defining the functions,
which can be developed simultaneously.
2. A schedule for development is derived for each set of functions that can be developed
independently.
3. The schedule for each set of independent functions is derived as the average of the
estimated time required for each phase of software development.
4. The total project schedule estimate is the average of the product development, which
includes documentation and various reviews.
Manual methods are based on past experience of software engineers. One or more software
engineers, who are experts in developing application, develop an estimate for schedule.
However, automated tools (like COSTAR, COOLSOFT) allow the user to customise schedule
in order to observe the impact on cost.
( f ) Risk Assessment: Risks are involved in every phase of software development; therefore,
the risks involved in a software project should be defined and analysed, and the impact of risks
on the project costs should also be determined. Ignoring risks can lead to adverse effects,
such as increased costs in the later stages of software development. In the cost estimation
process, four risk areas are considered, which are listed in Table 2.3.
Table 2.3 Risks Resulting from Poor Software Estimates

• Size of the software project: Software developers are always optimistic while estimating the size of the software. This often results in underestimation of software size, which in turn can lead to cost and schedule overruns.
• Staff skills: Misalignment of skills to tasks can result in inaccurate cost and schedule estimates. This can also result in poor estimates of project staffing requirements.
• Change in requirements: Requirements of a software project can change during any phase of software development. However, unconstrained change of requirements results in changing project goals, which can lead to customer dissatisfaction, and cost and schedule overruns.
(g) Inspect and Approve: The objective of this phase is to inspect and approve estimates in
order to improve the quality of an estimate and get an approval from top-level management.
The other objectives of this step are listed below:
• Confirm the software architecture and functional WBS.
• Verify the methods used for deriving the size, schedule, and cost estimates.
• Ensure that the assumptions and input data used to develop the estimates are correct.
• Ensure that the estimate is reasonable and accurate for the given input data.
• Confirm and record the official estimates for the project.
Once the inspection is complete and all defects have been removed, project manager,
quality assurance group, and top-level management sign the estimate. Inspection and approval
activities can be formal or informal as required but should be reviewed independently by
the people involved in cost estimation.
(h) Track Estimates: Tracking estimate over a period of time is essential, as it helps in
comparing the current estimate to previous estimates, resolving any discrepancies with
previous estimates, comparing planned cost estimates and actual estimates. This helps in
keeping track of the changes in a software project over a period of time. Tracking also
allows the development of a historical database of estimates, which can be used to adjust
various cost models or to compare past estimates to future estimates.
(i) Process Measurement and Improvement: Metrics should be collected (in each step) to
improve the cost estimation process. For this, two types of process metrics are used,
namely, process effective metrics and process cost metrics. The benefit of collecting these
metrics is to specify the reciprocal relation that exists between the accuracy of the estimates
and the cost of developing the estimates.
• Process effective metrics: Keep track of the effects of the cost estimating process. The
objective is to identify elements of the estimation process which enhance the estimation
process. These metrics also identify those elements which are of little or no use to the
planning and tracking processes of a project. The elements that do not enhance the
accuracy of estimates should be isolated and eliminated.
• Process cost metrics: Provide information about the implementation and performance
cost incurred in the estimation process. The objective is to quantify and identify different
ways to increase the cost effectiveness of the process. In these metrics, activities that
cost-effectively enhance the project planning and tracking process remain intact, while
activities that have a negligible effect on the project are eliminated.

Check Your Progress
10. Why is the software cost estimation process needed?
11. Explain the significance of estimating the size of a software project.
12. Explain the importance of tracking estimates.
2.6 DECOMPOSITION TECHNIQUES
Software cost estimation is a form of problem solving, and in most cases the problem to
be solved is too complex to be considered in a single piece. Therefore, the problem is
decomposed into components in order to achieve an accurate cost estimate. Two approaches
are mainly used for decomposition, namely, problem-based estimation and process-based
estimation. However, before estimating cost, the project planner should establish an estimate
of the software size, which is the quantitative outcome of the software project.
Software Sizing: Before estimating cost, it is necessary to estimate the size of the software
accurately. This is a cumbersome task, as many software systems are large. Therefore, the
software is divided into smaller components for size estimation, because it is easier to calculate
the size of smaller components: the complexity involved in them is less than in the larger components.
These small component estimates are then added to get an overall estimate of the software size.
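A minimal sketch of this component-wise approach is shown below; the component names and sizes are hypothetical and serve only to illustrate how individual estimates are added to obtain the overall size.

# Illustrative sketch: overall size obtained by summing component size estimates.
# Component names and LOC figures are hypothetical.

component_loc = {
    "user interface": 4000,
    "database layer": 6500,
    "business logic": 9000,
    "report generation": 3500,
}

total_loc = sum(component_loc.values())
print(f"Estimated size: {total_loc} LOC ({total_loc / 1000:.1f} KLOC)")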
Various approaches can be followed for estimating size. These include direct and indirect
approaches. In the direct approach, size is measured in terms of lines of code (LOC), while
in the indirect approach, size is measured in terms of function points (FP). Note that
the accuracy of size estimates depends on many parameters, which are listed below:
• The degree to which the size of the software has been properly estimated.
• The ability to convert size estimate into human effort, calendar time and money.
• The degree to which the ability of a software team is reflected by the software plan.
• The stability of product requirements and environment that supports the development
process.
It has been observed that an estimate of the project’s cost is as good as the estimate of its
size. In estimating cost, size is considered as the first problem faced by the project planner.
This problem is commonly known as software-sizing problem. In order to solve this
problem, various approaches are followed, which are listed below:
• Fuzzy logic sizing: To implement this approach, the planner must identify the application
type and its magnitude on a quantitative scale. The magnitude is then refined within the
original range.
• Function point sizing: This approach is used for measuring functionality delivered by
the software system. Function points are derived with the help of empirical relationship,
which is based on countable measures of software information domain and assessment
of software complexity.
• Standard component sizing: Generally, software comprises a number of standard
components, which are common to a particular application. Standard components
can be modules, screens, reports, lines of code, and so on. In the cost estimation process,
the number of occurrences of each component is estimated and then the historical data
of the project is used to determine the delivered size of each standard component.
• Change sizing: When an already existing project is modified in order to use it in the
new project, this approach is followed. The number and type of modifications that
should be accomplished in the existing project are estimated.
Note: It is easier to perform size estimation than cost estimation because component costs
cannot simply be added together (since other costs, such as integration costs, are also involved
while developing a system). Therefore, size is used as a key parameter by estimation models.
[Table 2.6: Function point count: the count of each measurement parameter (user inputs, external outputs, external inquiries, internal logical files and external interface files) is multiplied by its weighting factor to obtain the FP count.]
Example of Function Point: To estimate size in terms of function point, first FP count
should be determined, which is calculated by the following equation:
FP count = Count × Weighting factor (Average) ...(3)
For instance, in Table 2.6 FP count for number of user inputs (measurement parameter)
is calculated as follows:
FP count = 22 × 4 = 88
After determining count for each parameter and calculating count total, 14 other parameters
are considered, which are listed in Table 2.7.
Table 2.7 Value Adjustment Factors

Factor | Value
Backup and recovery | 3
Data communications | 1
Distributed processing | 1
Performance critical | 3
Existing operating environment | 2
On-line data entry | 3
Input transactions over multiple screens | 4
Master files updated on-line | 2
Information domain value complex | 4
Internal processing complex | 4
Code design for reuse | 3
Conversions/installation in design | 2
Multiple installations | 4
Application design for change | 4
Value adjustment factor | 40
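As a sketch of how these figures combine, the code below multiplies each measurement parameter's count by the standard average weighting factor and then applies the standard function point adjustment formula, FP = count total × (0.65 + 0.01 × sum of value adjustment factors). Only the 22 user inputs (weight 4) from the example above and the value adjustment total of 40 from Table 2.7 come from this unit; the remaining counts and the use of this particular adjustment equation are assumptions for illustration.

# Sketch of a function point calculation. Only the 22 user inputs (weight 4) and
# the value adjustment total of 40 come from the text; other counts are hypothetical,
# and the adjustment equation is the standard FP formula, assumed here.

average_weights = {"inputs": 4, "outputs": 5, "inquiries": 4,
                   "internal_files": 10, "external_interfaces": 7}

counts = {"inputs": 22, "outputs": 15, "inquiries": 14,
          "internal_files": 10, "external_interfaces": 6}

count_total = sum(counts[p] * average_weights[p] for p in counts)  # unadjusted count

value_adjustment_total = 40  # sum of the 14 factors in Table 2.7
fp = count_total * (0.65 + 0.01 * value_adjustment_total)

print(f"Count total (unadjusted): {count_total}")
print(f"Adjusted function points: {fp:.1f}")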
Table 2.8 Process-based Estimation
The average labour rate for this example is Rs 50,000 per month, and based on
Table 2.8 the estimated effort is 36 person-months. Considering these two factors, the total
estimated project cost is Rs 1,800,000. Note that, if required, the labour rate can be linked with
each framework activity or software engineering activity and the cost can be computed
independently.
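A small sketch of this calculation follows. The labour rate (Rs 50,000 per person-month) and the total effort of 36 PM come from the example above; the split of effort across activities is hypothetical and only illustrates computing the cost per activity independently.

# Sketch: cost from a process-based effort estimate. The total of 36 PM and the
# labour rate are from the text; the per-activity split is hypothetical.

labour_rate = 50000  # Rs per person-month

effort_by_activity = {  # hypothetical split that adds up to 36 PM
    "customer communication": 2,
    "planning and risk analysis": 3,
    "analysis and design": 10,
    "code and test": 17,
    "customer evaluation": 4,
}

cost_by_activity = {activity: effort * labour_rate
                    for activity, effort in effort_by_activity.items()}

total_effort = sum(effort_by_activity.values())   # 36 person-months
total_cost = sum(cost_by_activity.values())       # Rs 1,800,000

print(f"Estimated effort: {total_effort} PM")
print(f"Estimated cost:   Rs {total_cost:,}")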
Advantages:
• Easy to verify the working involved in it.
• Cost drivers are useful in effort estimation as they help in understanding the impact of different parameters involved in cost estimation.
• Efficient and good for sensitivity analysis.
• Can be easily adjusted according to the organization needs and environment.

Disadvantages:
• Difficult to accurately estimate size in the early phases of the project.
• Vulnerable to misclassification of the project type.
• Success depends on calibration of the model according to the needs of the organization. This is done using historic data, which is not always available.
• Excludes overhead cost, travel cost and other incidental cost.
The constructive cost model (COCOMO) is based on a hierarchy of three models, namely, the basic model,
the intermediate model, and the advanced model.
(a) Basic Model: In the basic model, only the size of the project is considered while calculating
effort. To calculate effort, the following equation (known as the effort equation) is used:
E = A × (size)^B ...(5)
where E is the effort in person-months and size is measured in terms of KDLOC. The
values of the constants 'A' and 'B' depend on the type of the software project. In this model,
the values of the constants for three different types of projects are listed in Table 2.10.
[Tables 2.10 and 2.11: Values of the constants A and B for the three project types (organic, semi-detached and embedded).]
Using equation (6) and the values of the constants for an organic project, the initial effort can be
calculated as follows:
Ei = 3.2 × (45)^1.05 ≈ 174 PM
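The calculation can be checked with a short sketch of the effort equation E = A × (size)^B; the constants 3.2 and 1.05 and the size of 45 KDLOC are the ones used for the organic project in this example.

# Sketch of the COCOMO effort equation E = A * size**B (size in KDLOC).
# Constants 3.2 and 1.05 are those applied to the organic project above.

def estimate_effort(size_kdloc, a, b):
    """Effort in person-months."""
    return a * size_kdloc ** b

initial_effort = estimate_effort(45, a=3.2, b=1.05)
print(f"Initial effort Ei = {initial_effort:.0f} PM")  # about 174 PM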
1. Fifteen parameters are identified. These parameters are called cost driver attributes,
which are rated as very low, low, nominal, high, very high or extra high. For
example, in Table 2.12, the reliability of a project can be rated according to this rating
scale. In the same table, the corresponding multiplying factors for reliability are 0.75,
0.88, 1.00, 1.15 and 1.40.
Table 2.12 Effort Multipliers for Cost Drivers

Cost Driver | Description | Very Low | Low | Nominal | High | Very High | Extra High
RELY | Required software reliability | 0.75 | 0.88 | 1.00 | 1.15 | 1.40 | –
DATA | Database size | – | 0.94 | 1.00 | 1.08 | 1.16 | –
CPLX | Product complexity | 0.70 | 0.85 | 1.00 | 1.15 | 1.30 | 1.65
TIME | Execution time constraint | – | – | 1.00 | 1.11 | 1.30 | 1.66
STOR | Main storage constraint | – | – | 1.00 | 1.06 | 1.21 | 1.56
VIRT | Virtual machine volatility | – | 0.87 | 1.00 | 1.15 | 1.30 | –
TURN | Computer turnaround time | – | 0.87 | 1.00 | 1.07 | 1.15 | –
ACAP | Analyst capability | 1.46 | 1.19 | 1.00 | 0.86 | 0.71 | –
AEXP | Applications experience | 1.29 | 1.13 | 1.00 | 0.91 | 0.82 | –
PCAP | Programmer capability | 1.42 | 1.17 | 1.00 | 0.86 | 0.70 | –
VEXP | Virtual machine experience | 1.21 | 1.10 | 1.00 | 0.90 | – | –
LEXP | Language experience | 1.14 | 1.07 | 1.00 | 0.95 | – | –
MODP | Modern programming practices | 1.24 | 1.10 | 1.00 | 0.91 | 0.82 | –
TOOL | Software tools | 1.24 | 1.10 | 1.00 | 0.91 | 0.83 | –
SCED | Development schedule | 1.23 | 1.08 | 1.00 | 1.04 | 1.10 | –
2. Next, the multiplying factors of all the cost drivers considered for the project are multiplied with
each other to obtain the effort adjustment factor (EAF). For instance, using the cost drivers listed
in Table 2.13, the EAF is calculated as:
1.15 × 0.85 × 0.91 × 1.00 = 0.8895
3. Once the EAF is calculated, the effort estimate for a software project is obtained by
multiplying the EAF with the initial estimate (Ei). To calculate effort, the following equation is used:
Total effort = EAF × Ei
For this example, the total effort will be 155 PM.
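Steps 1 to 3 can be tied together in a short sketch; the four multiplying factors are the ones used in the example above (1.15, 0.85, 0.91 and 1.00), giving the same figures of roughly 174 PM and 155 PM.

# Sketch of the intermediate-model adjustment: EAF is the product of the selected
# cost-driver multipliers, and total effort = EAF * Ei.

from math import prod

initial_effort = 3.2 * 45 ** 1.05            # Ei, about 174 PM

cost_driver_multipliers = [1.15, 0.85, 0.91, 1.00]
eaf = prod(cost_driver_multipliers)          # about 0.8895

total_effort = eaf * initial_effort
print(f"EAF = {eaf:.4f}, total effort = {total_effort:.0f} PM")  # about 155 PM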
(c) Advanced Model: In the advanced model, effort is calculated as a function of program size
and a set of cost drivers for each phase of software engineering. This model incorporates
all the characteristics of the intermediate model and provides a procedure for adjusting the
phase-wise distribution of the development schedule.
There are four phases in the advanced COCOMO model, namely, requirements planning and
product design (RPD), detailed design (DD), code and unit test (CUT), and integration and
test (IT). In the advanced model, each cost driver is rated as very low, low, nominal, high, and
very high. For all these ratings, cost drivers are assigned multiplying factors. The multiplying
factors for the analyst capability (ACAP) cost driver for each phase of the advanced model are
listed in Table 2.14. Note that these multiplying factors yield better estimates because the cost
driver ratings differ during each phase.
For example, consider a software project of the organic type, with a size of 45 KDLOC and the
ACAP cost driver rated as nominal (that is, a multiplying factor of 1.00). To calculate the effort for
the code and unit test phase in this example, only the ACAP cost driver is considered. The initial
effort can be calculated by using equation (6):
Ei = 3.2 × (45)^1.05 ≈ 174 PM
Using the value of Ei, the final estimate of effort can be calculated by using the following
equation:
E = Ei × 1
That is, E = 174 × 1 = 174 PM
2.8 LET US SUMMARIZE
UNIT 3 SYSTEM ANALYSIS
Structure
3.0 Introduction
3.1 Unit Objectives
3.2 What is Software Requirement?
3.2.1 Guidelines for Expressing Requirements; 3.2.2 Types of Requirements
3.2.3 Requirements Engineering Process
3.3 Feasibility Study
3.3.1 Types of Feasibility; 3.3.2 Feasibility Study Process
3.4 Requirements Elicitation
3.4.1 Elicitation Techniques
3.5 Requirements Analysis
3.5.1 Structured Analysis; 3.5.2 Object-oriented Modelling; 3.5.3 Other Approaches
3.6 Requirements Specification
3.6.1 Structure of SRS
3.7 Requirements Validation
3.7.1 Requirement Review; 3.7.2 Other Requirement Validation Techniques
3.8 Requirements Management
3.8.1 Requirements Management Process; 3.8.2 Requirements Change Management
3.9 Case Study: Student Admission and Examination System
3.9.1 Problem Statement; 3.9.2 Data Flow Diagrams
3.9.3 Entity Relationship Diagram; 3.9.4 Software Requirements Specification Document
3.10 Data Dictionary
3.11 Let us Summarize
3.12 Answers to ‘Check Your Progress’
3.13 Questions and Exercises
3.14 Further Reading
3.0 INTRODUCTION
In the software development process, the requirements phase is the first software engineering
activity, which translates ideas or views into a requirements document. This phase is a
user-dominated phase. Defining and documenting the user requirements in a concise and
unambiguous manner is the first major step towards achieving a high-quality product.
The requirements phase encompasses a set of tasks, which help to specify the impact of the
software on the organisation, the customers' needs, and how users will interact with the
developed software. The requirements are the basis of system design. If the requirements are
not correct, the end product will also contain errors. Note that the requirements activity, like all
other software engineering activities, should be adapted to the needs of the process, the
project, the product, and the people involved in the activity. Also, the requirements should
be specified at different levels of detail. This is because requirements are meant for different
readers (such as users, managers, system engineers, and so on). For example, managers may be
interested in knowing the overall features of the system rather than the details of how it is
implemented. Similarly, end-users are interested in knowing whether the specified requirements
meet their desired needs or not.
[Figure 3.2: Types of non-functional requirements: product requirements (efficiency, reliability, portability, usability), organizational requirements (delivery, implementation, standards), and external requirements (interoperability, ethical, legislative)]
Different types of non-functional requirements are shown in Figure 3.2. The description of
these requirements is given below:
• Product requirements: These requirements specify how a software product performs.
Product requirements comprise the following:
Efficiency requirements: Describe the extent to which software makes optimal
use of resources, the speed with which system executes, and the memory it consumes
for its operation. For example, system should be able to operate at least three times
faster than the existing system.
Reliability requirements: Describe the acceptable failure rate of the software. For
example, software should be able to operate even if a hazard occurs.
Portability requirements: Describe the ease with which software can be transferred
from one platform to another. For example, it should be easy to port software to
different operating system without the need to redesign the entire software.
Usability requirements: Describe the ease with which users are able to operate the
software. For example, software should be able to provide access to functionality
with fewer keystrokes and mouse clicks.
• Organizational requirements: These requirements are derived from the policies and
procedures of an organization. Organizational requirements comprise the following:
Delivery requirements: Specify when software and its documentation are to be
delivered to the user.
Implementation requirements: Describe requirements, such as the programming
language and the design method to be used.
Standards requirements: Describe the process standards to be used during software
development. For example, the software should be developed using standards specified
by ISO (International Organization for Standardization) and IEEE.
• External requirements: These requirements include all the requirements that affect the
software or its development process externally. External requirements comprise the
following:
Interoperability requirements: Define the way in which different computer-based
systems interact with each other in one or more organizations.
Ethical requirements: Specify the rules and regulations of the software so that
they are acceptable to users.
Legislative requirements: Ensure that software operates within the legal
jurisdiction. For example, pirated software should not be sold.
Non-functional requirements are difficult to verify. Hence, it is essential to write non-
functional requirements quantitatively so that they can be tested. For this, non-functional
requirements metrics are used. These metrics are listed in Table 3.1.
[Table 3.1: Metrics for specifying non-functional requirements, listing features and the measures used to quantify them]
(c) Domain Requirements: Requirements derived from the application domain of a system,
instead of from the needs of the users, are known as domain requirements. These
requirements may be new functional requirements or specify a method to perform some
particular computations. In addition, these requirements include any constraint that may be
present in existing functional requirements. As domain requirements reflect fundamentals
of the application domain, it is important to understand these requirements. Also, if these
requirements are not fulfilled, it may be difficult to make the system work as desired.
A system can include a number of domain requirements. For example, a system may
include a design constraint that describes the user interface, which is capable of accessing
all the databases used in a system. It is important for a development team to create databases
and interface design as per established standards. Similarly, the requirements requested by
the user, such as copyright restrictions and security mechanism for the files and documents
used in the system are also domain requirements.
When domain requirements are not expressed clearly, it can result in various problems,
such as:
• Problem of understandability: When domain requirements are specified in the language
of the application domain (such as mathematical expressions), it becomes difficult for
software engineers to understand these requirements.
• Problem of implicitness: When domain experts understand the domain requirements
but do not express these requirements clearly, it may create a problem (due to incomplete
information) for the development team to understand and implement the requirements in
the system.
[Figure: The requirements engineering process: feasibility study (feasibility report), requirements elicitation, requirements analysis and modelling, requirements specification (SRS), and requirements management, with the user as the starting point]
3.3 FEASIBILITY STUDY
Feasibility is defined as the practical extent to which a project can be performed successfully.
To evaluate feasibility, a feasibility study is performed, which determines whether the
solution considered to accomplish the requirements is practically workable in the software
or not. For this, it considers information, such as resource availability, cost estimates for
software development, benefits of software to organization after it is developed, and cost
to be incurred on its maintenance. The objective of feasibility study is to establish the
reasons for developing software that is acceptable to users, adaptable to change, and
conformable to established standards. Various other objectives of feasibility study are listed
below:
• Analyze whether the software will meet organizational requirements or not.
• Determine whether the software can be implemented using current technology and
within the specified budget and schedule or not.
• Determine whether the software can be integrated with other existing software or not.
(a) Technical Feasibility: Technical feasibility assesses the current resources (such as
hardware and software) and technology, which are required to accomplish user requirements
in the software within the allocated time and budget. For this, software development team
ascertains whether the current resources and technology can be upgraded or added in the
software to accomplish specified user requirements. Technical feasibility performs the
tasks listed below:
• Analyzes the technical skills and capabilities of software development team members.
• Determines whether the relevant technology is stable and established or not.
• Ascertains that the technology chosen for software development has large number of
users so that they can be consulted when problems arise or improvements are required.
(b) Operational Feasibility: Operational feasibility assesses the extent to which the required
software performs a series of steps to solve business problems and user requirements. This
feasibility is dependent on human resources (the software development team) and involves
visualizing whether or not the software will operate after it is developed and will remain
operative once it is installed. In addition, operational feasibility performs the tasks listed below:
• Determines whether the problems proposed in user requirements are of high priority or
not.
• Determines whether the solution suggested by software development team is acceptable
or not.
• Analyzes whether users will adapt to new software or not.
• Determines whether the organization is satisfied by the alternative solutions proposed by
software development team or not.
(c) Economic Feasibility: Economic feasibility determines whether the required software
is capable of generating financial gains for an organization or not. It involves the cost
incurred on software development team, estimated cost of hardware and software, cost of
performing feasibility study, and so on. For this, it is essential to consider expenses made
on purchases (such as hardware purchase) and activities required to carry out software
development. In addition, it is necessary to consider the benefits that can be achieved by
developing the software.
Software is said to be economically feasible if it focuses on the issues listed below:
• Cost incurred on software development produces long-term gains for an organization.
• Cost required to conduct full software investigation (such as requirements elicitation
and requirements analysis).
• Cost of hardware, software, development team, and training.
Note: As economic feasibility assesses cost and benefits of the software, cost-benefit analysis
is performed for it. Economic feasibility uses several methods to perform cost-benefit
analysis, such as payback analysis, return on investment (ROI), and present value analysis.
• General information: Describes the purpose and scope of feasibility study. It also
describes system overview, acronyms and abbreviations, and points of contact to be
used. System overview provides description about the name of organization responsible
for software development, system name or title, system category, operational status,
and so on. Project references provide a list for the references used to prepare this
document, such as documents relating to the project or previously developed documents
that are related to the project. Acronyms and abbreviations provide a list of the terms
that are used in this document along with their meanings. Points of contact provide a
list of points of organizational contact with users for information and coordination. For
example, users require assistance to solve problems (such as troubleshooting) and collect
information, such as contact number, E-mail address, and so on.
• Management summary: Provides the information listed below:
Environment: Identifies the individuals responsible for software development. It
provides information about input and output requirements, processing requirements
of software, and the interaction of software with other software. In addition, it also
identifies system security requirements and system’s processing requirements.
Current functional procedures: Describes the current functional procedures of
an existing system, whether automated or manual. It also includes the data flow of
current system and the number of team members required to operate and maintain
the software.
Functional objective: Provides information about functions of the system, such as
new services, increased capacity, and so on.
Performance objective: Provides information about performance objectives, such
as reduced staff and equipment cost, increased processing speed of software, and
improved controls.
Assumptions and constraint: Provides information about assumptions and
constraints, such as operational life of the proposed software, financial constraints,
changing hardware, software and operating environment, and availability of information
and sources.
Methodology: Describes the methods that are applied to evaluate the proposed
software in order to reach a feasible alternative. These methods include survey,
modelling, benchmarking, and so on.
Evaluation criteria: Identifies the criteria, such as cost, priority, development time,
and ease of system use. The criteria are applicable for the development process to
determine the most suitable system option.
Recommendation: Describes a recommendation for the proposed system. This
includes the delays and acceptable risks.
• Proposed software: Describes the overall concept of the system as well as the procedure
to be used to meet user requirements. In addition, it provides information about
improvements, time and resource costs, and impacts. Improvements are performed to
enhance functionality and performance of existing software. Time and resource costs
include the costs associated with software development from its requirement to its
maintenance and staff training. Impacts describe the possibility of future happenings
and include various types of impacts, which are listed below:
Equipment impacts: Determine new equipment requirements and changes to be
made in the currently available equipment requirements.
Software impacts: Specify any additions or modifications required in the existing
software and supporting software to adapt to the proposed software.
Organizational impacts: Describe any changes in organization, staff, and skills
requirement.
Operational impacts: Describe effects on operations, such as user operating
procedures, data processing, data entry procedures, and so on.
Developmental impacts: Specify developmental impacts, such as resources required
to develop databases, resources required to develop and test the software, and specific
activities to be performed by the user during software development.
Security impacts: Describe security factors that may influence the development,
design, and continued operation of the proposed software.
• Alternative systems: Provide description of alternative systems, which are considered
in the feasibility study. It also describes the reasons for choosing a particular alternative
system to develop the proposed software and the reasons for rejecting other alternative
systems.
Check Your Progress
3. What are the objectives of feasibility study?
4. List the tasks performed by operational feasibility.
5. What is the role of information assessment in the feasibility study process?
6. Explain briefly the functional objective and performance objective of a feasibility study plan.
3.4 REQUIREMENTS ELICITATION
Requirements elicitation (also known as requirements capture or requirements
acquisition) is a process of collecting information about software requirements from different
individuals, such as users and other stakeholders. Stakeholders are individuals who are
affected by the system, directly or indirectly. These include project managers, marketing
people, consultants, software engineers, maintenance engineers, and users.
Various issues may arise during requirements elicitation and may cause difficulty in
understanding the software requirements. Some of the problems are listed below:
• Problems of scope: This problem arises when the boundary of software (that is, scope)
is not defined properly. Due to this, it becomes difficult to identify objectives as well as
functions and features to be accomplished in software.
• Problems of understanding: This problem arises when users are not certain about
their requirements and thus are unable to express what they require in software and
which requirements are feasible. This problem also arises when users have no or little
knowledge of the problem domain and are unable to understand the limitations of
computing environment of software.
• Problems of volatility: This problem arises when requirements change over time.
Requirements elicitation uses elicitation techniques, which facilitate a software engineer to
understand user requirements and software requirements needed to develop the proposed
software.
[Figure: Approaches to building the analysis model: structured analysis, object-oriented modelling, and other approaches]
(a) Data Flow Diagram (DFD): IEEE defines data flow diagram (also known as bubble
chart or work flow diagram) as “a diagram that depicts data sources, data sinks, data
storage, and processes performed on data as nodes, and logical flow of data as links
between the nodes”. DFD allows software development team to depict flow of data from
one or more processes to another. In addition, DFD accomplishes the objectives listed
below:
• Represents system data in hierarchical manner and with required level of detail.
• Depicts processes according to defined user requirements and software scope.
DFD depicts the flow of data within a system and considers a system that transforms
inputs into the required outputs. When there is complexity in a system, data needs to be
transformed by using various steps to produce an output. These steps are required to refine
the information. The objective of DFD is to provide an overview of the transformations
that occur to the input data within the system in order to produce an output.
A DFD should not be confused with a flowchart. A DFD represents the flow of data, whereas
a flowchart depicts the flow of control. Also, a DFD does not depict information about
the procedure to be used for accomplishing the task. Hence, while making a DFD, procedural
details about the processes should not be shown. A DFD helps a software designer to describe
the transformations taking place in the path of data from input to output.
A DFD comprises four basic notations (symbols), which help to depict information in a
system. These notations are listed in Table 3.2.
Data store: Indicates the place for storing information within the system.
While creating a DFD, certain guidelines are followed to depict the data flow of system
requirements effectively. These guidelines help to create DFD in an understandable manner.
The commonly followed guidelines for creating DFD are listed below:
• DFD notations should be given meaningful names. For example, verb should be used for
naming a process whereas nouns should be used for naming external entity, data store,
and data flow.
• Abbreviations should be avoided in DFD notations.
• Each process should be numbered uniquely but the numbering should be consistent.
• DFD should be created in an organized manner so that it is easily understandable.
• Unnecessary notations should be avoided in DFD in order to avoid complexity.
• DFD should be logically consistent. For this, processes without any input or output and
any input without output should be avoided.
• There should be no loops in DFD.
• DFD should be refined until each process performs a simple function so that it can be
easily represented as a program component.
• DFD should be organized in a series of levels so that each level provides more detail than
the previous level.
• The name of a process should be carried to the next level of DFD.
• Each DFD should not have more than six processes and related data stores.
• The data store should be depicted at the context level where it first describes an interface
between two or more processes. Then, the data store should be depicted again in the
next level of DFD that describes the related processes.
There are various levels of DFD, which provide detail about the input, processes, and
output of a system. Note that the level of detail of process increases with increase in
level(s). However, these levels do not describe the systems’ internal structure or behaviour.
These levels are listed below:
• Level 0 DFD (also known as context diagram): Shows an overall view of the system.
• Level 1 DFD: Elaborates level 0 DFD and splits the process into a detailed form.
• Level 2 DFD: Elaborates level 1 DFD and displays the process(s) in a more detailed
form.
• Level 3 DFD: Elaborates level 2 DFD and displays the process(s) in a detailed form.
To understand the various levels of DFD, let us consider the example of a banking system. In
Figure 3.8, a level 0 DFD is drawn; this DFD represents how the 'user' entity interacts with the
'banking system' process and avails its services. The level 0 DFD depicts the entire banking
system as a single process. There are various tasks performed in a bank, such as transaction
processing, pass book entry, registration, demand draft creation, and online help. The data
flow indicates that these tasks are performed by both the user and the bank. Once the user
performs a transaction, the bank verifies whether the user is registered in the bank or not.
[Figures: Level 0, level 1 and level 2 DFDs of the banking system (Figure 3.8 shows the level 0 DFD), depicting the user entity interacting with processes such as verify user, transaction, registration, online help, check account status and provide statement, along with data flows such as user info, account number and date, cash details, DD details, cheque and receipt]
Generally, it is considered that object-oriented systems are easier to develop and maintain.
Also, it is considered that the transition from object-oriented analysis to object-oriented
design can be done easily. This is because object-oriented analysis is resilient to changes as
objects are more stable than functions that are used in structured analysis. Note that object-
oriented analysis comprises a number of steps, which includes identifying objects, identifying
structures, identifying attributes, identifying associations, and defining services.
[Figure: Steps of object-oriented analysis: Step 1, identifying objects; Step 2, identifying structures; Step 3, identifying attributes; Step 4, identifying associations; Step 5, defining services]
(a) Identifying Objects: While performing analysis, an object encapsulates the attributes
on which it provides the services. Note that an object represents entities in a problem
domain. The identification of the objects starts by viewing the problem space and its
description. Then, a summary of the problem space is gathered to consider the 'nouns'.
Nouns indicate the entities used in the problem space, which will further be modelled as
objects. Some examples of nouns that can be modelled as objects are structures, events,
roles, and locations.
(b) Identifying Structures: Structures depict the hierarchies that exist between the objects.
Object modelling applies the concept of generalization and specialization to define hierarchies
and to represent the relationships between the objects. As mentioned earlier, superclass is a
collection of classes, which can further be refined into one or more subclasses. Note that
a subclass can have its own attributes and services apart from the attributes and services
inherited from its superclass. To understand generalization and specialization, consider an
example of class ‘car’. Here, ‘car’ is a superclass, which has attributes, such as wheels,
doors, and windows. There may be one or more subclasses of a superclass. For instance,
superclass ‘car’ has subclasses ‘Mercedes’ and ‘Toyota’, which have the inherited attributes
along with their own attributes, such as comfort, locking system, and so on.
It is essential to consider the objects that can be identified as generalization so that the
classification of structure can be identified. In addition, the objects in the problem domain
should be determined to check whether they can be classified into specialization or not.
Note that the specialization should be meaningful for the problem domain.
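Purely as an illustration, the 'car' hierarchy described above could be written as the following Python sketch; the class and attribute names come from the example, while everything else is an assumption.

    # Sketch of generalization/specialization for the 'car' example.
    # Car is the superclass; Mercedes and Toyota are subclasses that inherit
    # its attributes and add their own.

    class Car:
        def __init__(self, wheels, doors, windows):
            self.wheels = wheels          # attributes common to every car
            self.doors = doors
            self.windows = windows

    class Mercedes(Car):
        def __init__(self, wheels, doors, windows, comfort_level):
            super().__init__(wheels, doors, windows)
            self.comfort_level = comfort_level    # subclass-specific attribute

    class Toyota(Car):
        def __init__(self, wheels, doors, windows, locking_system):
            super().__init__(wheels, doors, windows)
            self.locking_system = locking_system  # subclass-specific attribute

    m = Mercedes(wheels=4, doors=4, windows=6, comfort_level="high")
    print(m.wheels, m.comfort_level)    # inherited attribute and own attribute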
(c) Identifying Attributes: Attributes add details about an object and store the data for the
object. For example, the class ‘book’ has attributes, such as author name, ISBN, and
publication house. The data about these attributes is stored in the form of values and are
hidden from outside the objects. However, these attributes are accessed and manipulated
by the service functions used for that object. The attributes to be considered about an
object depend on the problem and the requirement for that attribute. For example, while
modelling the student admission system, attributes, such as age and qualification are required
for the object ‘student’. On the other hand, while modelling for hospital management
system, the attribute ‘qualification’ is unnecessary and requires other attributes of class
‘student’, such as gender, height, and weight. In short, it can be said that while using an
object, only the attributes that are relevant and required by the problem domain should be
considered.
(d) Identifying Associations: Associations describe the relationship between the instances
of several classes. For example, an instance of class 'University' is related to an instance of
class 'person' by the 'educates' relationship. Note that there is no relationship between the
class 'University' and the class 'person' as such; rather, only the instance(s) of class 'person'
(that is, a student) are related to an instance of class 'University'. This is similar to entity
relationship modelling, where one instance can be related by 1:1, 1:M, and M:M relationships.
An association may have its own attributes, which may or may not be present in other
objects. Depending on the requirement, the attributes of the association can be ‘forced’ to
belong to one or more objects without losing the information. However, this should not be
done unless the attribute itself belongs to that object.
(e) Defining Services: As mentioned earlier, an object performs some services. These
services are carried out when an object receives a message for it. Services are a medium to
change the state of an object or carry out a process. These services describe the tasks and
processes provided by a system. It is important to consider the ‘occur’ services in order to
create, destroy, and maintain the instances of an object. To identify the services, the system
states are defined and then the external events and the required responses are described.
For this, the services provided by objects should be considered.
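As a hypothetical sketch of attributes and services working together, consider a 'student' object similar to the admission example: its attributes are stored inside the object, and its services change the object's state when a message (method call) is received. The method names used here are illustrative assumptions, not part of the case study.

    # Sketch of an object whose services (methods) change its state.

    class Student:
        def __init__(self, name, age, qualification):
            self._name = name                  # attributes stored inside the object
            self._age = age
            self._qualification = qualification
            self._admitted = False

        def admit(self):
            """Service: change the object's state to 'admitted'."""
            self._admitted = True

        def is_admitted(self):
            """Service: report the current state without exposing the attributes."""
            return self._admitted

    s = Student("A. Kumar", 19, "10+2")
    s.admit()                  # sending a message invokes the service
    print(s.is_admitted())     # True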
[Figure: Object/entity model of a bank account system: an account with attributes such as account number, first name, last name, address, contact number, balance and account type; specializations saving account, current account and over-draft account; and an ATM transaction (with ATM number, ATM place and cash limit) performing services such as deposit cash, deposit cheque and withdraw cash]
Cardinality and Modality: Although data objects, data attributes, and relationships are
essential for structured analysis, additional information about them is required to understand
the information domain of the problem. This information includes cardinality and modality.
Cardinality specifies the number of occurrences (instances) of one data object or entity
that relates to the number of occurrence of another data object or entity. It also specifies
the number of entities that are included in a relationship. Modality describes whether a
relationship between two or more entities or data objects is required or not. The modality
of a relationship is 0 if the relationship is optional, whereas the modality is 1 if an
occurrence of the relationship is essential.
To understand the concept of cardinality and modality properly, let us consider an example.
In Figure 3.16, the 'user' (customer) entity is related to the 'order' entity. Here, the cardinality
for the 'user' entity indicates that a user places an order, whereas the modality for the 'user'
entity indicates that it is necessary for a user to place an order. The cardinality for 'order'
indicates that a single user can place many orders, whereas the modality for the 'order' entity
indicates that a user can arrive without any 'order'.
[Figure 3.16: Cardinality and modality of the customer-order relationship; one modality annotation reads 'Customer is required to have an order' and the other 'Customer can arrive without any order']
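One possible (assumed) way to make the cardinality and modality of the customer-order example concrete is to express the relationship in code: one customer can place many orders (cardinality 1:M), every order must belong to a customer (modality 1 on that end), and a customer may exist without any order (modality 0 on the other end).

    # Sketch of the 1:M customer/order relationship with its modalities.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Order:
        order_id: int
        customer_id: int      # modality 1: every order must reference a customer

    @dataclass
    class Customer:
        customer_id: int
        orders: List[Order] = field(default_factory=list)  # modality 0: may be empty

    c = Customer(customer_id=1)          # a customer can arrive without any order
    c.orders.append(Order(order_id=101, customer_id=c.customer_id))
    c.orders.append(Order(order_id=102, customer_id=c.customer_id))
    print(len(c.orders))                 # cardinality 1:M: one customer, many orders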
3.6 REQUIREMENTS SPECIFICATION
The output of requirements phase of software development process is the software
requirement specification document (also known as requirements document). This
document lays a foundation for software engineering activities and is created once the entire
requirements are elicited and analyzed. Software requirement specification (SRS) is a formal
document, which acts as a representation of software that enables the users to review
whether it (SRS) is according to their requirements or not. In addition, the requirements
document includes user requirement for a system as well as detailed specification of the
system requirement.
IEEE defines software requirement specification as “a document that clearly and precisely
describes each of the essential requirements (functions, performance, design constraints,
and quality attributes) of the software and the external interfaces. Each requirement is
defined in such a way that its achievement can be objectively verified by a prescribed
method, for example, inspection, demonstration, analysis, or test.” Note that requirement
specification can be in the form of a written document, a mathematical model, a collection
of graphical models, a prototype, and so on.
Essentially, what passes from the requirements analysis activity to the specification activity is
the knowledge acquired about the system. The need for maintaining a requirements document
is that the modelling activity essentially focuses on the problem structure and not on its
behaviour, whereas the SRS clearly specifies performance constraints, design constraints,
standards compliance, recovery, and so on. This information helps in developing a proper
design of the system. Various other purposes served by the SRS are listed
below:
• Feedback: Provides a feedback, which ensures to the user that the organization (which
develops the software) understands the issues or problems to be solved and the software
behaviour necessary to address those problems.
• Decompose problem into components: Organises the information and divides the
problem into its component parts in an orderly manner.
• Validation: Uses validation strategies, applied to the requirements to acknowledge that
requirements are stated properly.
• Input to design: Contains sufficient detail in the functional system requirements to
devise a design solution.
• Basis for agreement between user and organization: Provides a complete description
of the functions to be performed by the system. In addition, it helps the users to determine
whether the specified requirements are accomplished or not.
• Reduce the development effort: Enables developers to consider user requirements
before the designing of the system commences. As a result, ‘rework’ and inconsistencies
in the later stages can be reduced.
• Estimating costs and schedules: Determines the requirements of the system and thus
enable the developer to have a ‘rough’ estimate of the total cost and the schedule of the
project.
The requirements document is used by various individuals in the organization. As shown in
Figure 3.17, system customers need the SRS to specify and verify whether the requirements
meet their desired needs. In addition, the SRS enables the managers to plan for the system
development processes. System engineers need the requirements document to understand what
system is to be developed. These engineers also require this document to develop validation
tests for the required system. Lastly, the requirements document is required by system
maintenance engineers to understand the requirements and the relationships between their parts.
The requirements document has diverse users; therefore, along with communicating the
requirements to the users, it also has to define the requirements in precise detail for developers
and testers. In addition, it should also include information about possible changes in the
system, which can help system designers to avoid restrictive design decisions. The SRS also
helps maintenance engineers to adapt the system to new requirements.
[Figure 3.17: Users of the software requirements specification document, including system customers, managers, system engineers, and software/maintenance engineers]
1.0 Introduction
1.1 Purposes
1.2 Scope
1.3 Definitions, Acronyms, and Abbreviations
1.4 References
1.5 Overview
2.0 The Overall Description
2.1 Product Perspective
2.1.1 System Interface
2.1.2 User Interface
2.1.3 Hardware Interface
2.1.4 Software Interface
2.1.5 Communications Interface
2.1.6 Memory Constraints
2.1.7 Operations
2.1.8 Site Adaptation Requirements
2.2 Product Functions
2.3 User Characteristics
2.4 Constraints
2.5 Assumptions and Dependency
2.6 Apportioning of Requirements
3.0 Specific Requirements
3.1 External Interface
3.2 Functions
3.3 Performance Requirements
3.4 Logical Database of Requirement
3.5 Design Constraints
3.5.1 Standards Compliance
3.6 Software System Attributes
3.6.1 Reliability
3.6.2 Availability
3.6.3 Security
3.6.4 Maintainability
3.6.5 Portability
3.7 Organizing the Specific Requirements
3.7.1 System Mode
3.7.2 User Class
3.7.3 Objects
3.7.4 Feature
3.7.5 Stimulus
3.7.6 Response
3.7.7 Functional Hierarchy
3.8 Additional Comments
4.0 Change Management Process
5.0 Document Approvals
6.0 Supporting Information
• Change management process: Determines the change management process in order
to identify, evaluate, and update the SRS to reflect changes in the project scope and
requirements.
• Document approvals: Provide information about the approvers of the SRS document
with details, such as the approver's name, signature, date, and so on.
• Supporting information: Provides information, such as the table of contents, index, and so
on. This is necessary especially when the SRS is prepared for large and complex projects.
[Figure 3.19: Requirements validation, with its inputs (requirements document, organizational knowledge, organizational standards) and outputs (list of problems, agreed actions)]
In Figure 3.19, various inputs, such as the requirements document, organizational knowledge,
and organizational standards are shown. The requirements document should be formulated
and organized according to the standards of the organization. The organizational knowledge
is used to estimate the realism of the requirements of the system. The organizational
standards are the specified standards followed by the organization according to which the
system is to be developed.
Check Your Progress
12. Define software requirement specification (SRS).
13. List the guidelines for preparing SRS.
The output of requirements validation is a list of problems and a list of agreed actions for those
problems. The list of problems indicates the problems encountered in the requirements document
during the requirements validation process. The agreed-actions list displays the actions to be
performed to resolve the problems depicted in the problem list.
3.7.1 Requirement Review
Requirements validation determines whether the requirements are substantial to design the
system or not. Various problems are encountered during requirements validation. These
problems are listed below:
• Unclear stated requirements.
• Conflicting requirements are not detected during requirements analysis.
• Errors in the requirements elicitation and analysis.
• Lack of conformance to quality standards.
To avoid the problems stated above, a requirements review is conducted, in which a review team
performs a systematic analysis of the requirements. The review team consists of software
engineers, users, and other stakeholders who examine the specification to ensure that the
problems associated with consistency, omissions, and errors are detected and corrected. In
addition, the review team checks whether the work products produced during the requirements
phase conform to the standards specified for the process, project, and product or not.
In a review meeting, each participant goes over the requirements before the meeting starts
and marks the items that are dubious or that, in their view, need further clarification. Checklists
are often used for identifying such items. Checklists ensure that no source of errors, whether
major or minor, is overlooked by the reviewers. A 'good' checklist consists of the following:
• Is the initial state of the system defined?
• Does a conflict between one requirement and the other exist?
• Are all requirements specified at the appropriate level of abstraction?
• Is the requirement necessary or does it represent an add-on feature that may not be
essentially implemented?
• Is the requirement bounded and does it have a clearly defined meaning?
• Is each requirement feasible in the technical environment where the product or system
is to be used?
• Is testing possible, once requirement is implemented?
• Are requirements associated with performance, behaviour, and operational characteristics
clearly stated?
• Are requirement patterns used to simplify the requirements model?
• Are the requirements consistent with overall objective specified for the system/product?
• Have all hardware resources been defined?
• Is provision for possible future modifications specified?
• Are functions included as desired by the user (and stakeholder)?
• Can the requirements be implemented in the available budget and technology?
• Are the sources of the requirements or any system models (created) stated clearly?
The checklists ensure that the requirements reflect users' needs and that the requirements
provide the 'groundwork' for design. Using the checklist, the participants specify the list of
potential errors they have uncovered. Lastly, the requirements analyst either agrees to the
presence of errors or clarifies that no errors exist.
3.7.2 Other Requirement Validation Techniques
A number of other requirement validation techniques are used either individually or in
conjunction with other techniques to check the entire system or parts of the system. The
selection of the validation technique depends on the appropriateness and the size of the
system to be developed. Some of these techniques are listed below:
• Test case generation: The requirements specified in the SRS document should be
testable. The tests designed during the validation process can reveal problems in the
requirements. If a test is difficult to design, it implies that the requirement is difficult
to implement and requires improvement.
• Automated consistency analysis: If the requirements are expressed in the form of a
structured or formal notation, then computer-aided software engineering (CASE) tools
can be used to check the consistency of the system. A requirements database is created
using a CASE tool, and the tool checks all the requirements in the database using the rules
of the method or notation. A report of all inconsistencies is then generated and managed
(a simple sketch of this kind of check follows this list).
• Prototyping: Prototyping is normally used for validating and eliciting new requirements
of the system. This helps to interpret assumptions and provide an appropriate feedback
about the requirements to the user. For example, if users have approved a prototype,
which consists of graphical user interface, then the user interface can be considered
validated.
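The following is a minimal sketch of the kind of automated check mentioned above. It is not a real CASE tool; it assumes an in-memory 'requirements database' that is scanned for duplicate identifiers and for references to requirements that do not exist.

    # Sketch of a very small automated consistency check over a requirements list.
    # Each requirement is a (req_id, text, references) tuple; references name the
    # requirement ids this requirement depends on.

    requirements = [
        ("R1", "The system shall allow students to register online.", []),
        ("R2", "The system shall display results on the website.", ["R1"]),
        ("R3", "The system shall e-mail the mark-sheet.", ["R9"]),   # R9 does not exist
    ]

    def check_consistency(reqs):
        ids = [r[0] for r in reqs]
        problems = []
        # duplicate identifiers
        for rid in set(ids):
            if ids.count(rid) > 1:
                problems.append(f"Duplicate requirement id: {rid}")
        # references to requirements that are not in the database
        for rid, _text, refs in reqs:
            for ref in refs:
                if ref not in ids:
                    problems.append(f"{rid} refers to missing requirement {ref}")
        return problems

    for p in check_consistency(requirements):
        print(p)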
Check Your Progress
14. What are the objectives of requirements validation?
15. What is the output of the requirement validation phase?
3.8 REQUIREMENTS MANAGEMENT
[Figure 3.20: Requirements management and its associated activities: requirements elicitation, requirements analysis, requirements specification, requirements validation, change management, requirements attributes, and requirements tracing]
Requirements management can be defined as a process of eliciting, documenting, organizing,
and controlling changes to the requirements. Generally, the process of requirements
management begins as soon as the requirements document is available, but 'planning' for
managing the changing requirements should start during the requirements elicitation process.
The essential activities performed in requirements management are listed below:
• Recognizes the need for changes to the requirements.
• Establishes a relationship amongst stakeholders and involves them in the requirements
engineering process.
• Identifies and tracks requirements attributes.
Requirements management enables the development team to identify, control, and track
requirements and changes that occur as the software development process progresses.
Other advantages associated with the requirements management are listed below:
• Better control of complex projects: Provides the development team with a clear
understanding of what, when and why software is to be delivered. The resources are
allocated according to user-driven priorities and relative implementation effort.
• Improves software quality: Ensures that the software performs according to
requirements to enhance software quality. This can be achieved when the developers
and testers have a concise understanding of what to develop and test.
• Reduced project costs and delays: Minimizes errors early in the development cycle, as
it is expensive to ‘fix’ errors at the later stages of the development cycle. As a result, the
project costs also reduce.
• Improved team communications: Facilitates early involvement of users to ensure that
their needs are achieved.
• Easing compliance with standards and regulations: Standards concerned with software
compliance and process improvement require a thorough understanding of requirements
management. For example, the capability maturity model (CMM) addresses requirements
management as one of the first steps to improve software quality.
All the user requirements are specified in the software requirement specification. The project
manager as part of requirements management tracks the requirement for the current project
and those requirements, which are planned for the next release.
The following traceability tables are commonly maintained as part of requirements management:
Features traceability: Indicates how requirements relate to important features specified by the
user.
Source traceability: Identifies the source of each requirement by linking the requirements to
the stakeholders who proposed them. When a change is proposed, information from this table
can be used to find and consult the stakeholders.
Requirement traceability: Indicates how dependent requirements in the SRS are related to one
another. Information from this table can be used to evaluate the number of requirements that
will be affected due to the proposed change(s).
Design traceability: Links the requirements to the design modules where these requirements
are implemented. Information from this table can be used to evaluate the impact of proposed
requirements changes on the software design and implementation.
Interface traceability: Indicates how requirements are related to the internal and external
interfaces of a system.
Note that a traceability matrix is useful when a small number of requirements are to be managed.
However, traceability matrices are expensive to maintain when a large system with a large
number of requirements is to be developed, because such requirements are not easy to manage
manually. Due to this, the traceability information of a large system is stored in a 'requirements
database', where each requirement is explicitly linked to related requirements. This helps to
assess how a change in one requirement affects different aspects of the system to be developed.
[Table: Sample traceability matrix relating requirements 1.1 to 3.2 to one another, with entries marked 'U' and 'R' indicating dependencies between requirements]
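Traceability information of this kind can be kept in something as simple as a mapping from each requirement to the requirements it depends on. The sketch below is an illustration under that assumption; the identifiers mirror the sample matrix above, and the dependency values are invented.

    # Sketch of requirement traceability kept as a dictionary: each requirement id
    # maps to the set of requirement ids it depends on (illustrative values only).

    traceability = {
        "1.1": {"1.2", "2.1"},
        "1.2": {"3.1"},
        "1.3": {"2.2"},
        "2.1": {"1.3"},
    }

    def impacted_by(changed_req, table):
        """Return the requirements that directly depend on the changed requirement."""
        return {rid for rid, deps in table.items() if changed_req in deps}

    print(impacted_by("1.3", traceability))   # requirements affected by a change to 1.3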
[Figure: Requirements change management process: an identified problem undergoes problem analysis and change specification, followed by change analysis and costing, and then change implementation, resulting in revised requirements]
Project: Students need to submit a project in the IVth semester. This project carries
100 marks, and a student obtaining 50% or more (>= 50 marks) is said to have
passed. Also, students are required to appear for a viva-voce session, which will be
related to the project.
• Result generation module: The result is declared on the university's website. This
website contains mark sheets of the students who have appeared in the examination of
the said semester (for which the registration fee has been paid). Note that to view the result,
a student can use the enrolment number as the password.
[Figures: Level 0, level 1 and level 2 data flow diagrams of the student admission and examination system, in which the student, administrator and coordinator interact with processes such as registration (application for registration, enrolment number allotted), student information management, admission, result generation and report generation, with data flows such as student details, marks details, semester results and mark-sheets; followed by the entity relationship diagram relating Student, Subject, Semester and Examination, with attributes such as enrolment number, name, subject code and type, internal and external marks, year, total marks, and pass/fail status]
The software interfaces that will be used for the proposed system are listed
below:
Windows-based operating system (such as, Windows 95/98/XP/NT).
Oracle 8i as the database management system (DBMS) to store files and
other related information.
Crystal reports 8 to generate and view reports.
Visual Basic 6.0 as a front-end tool for coding and designing the software.
Internet Explorer 5.5 or higher to view results of the examination on the
Internet.
(v) Communication interface
None
(vi) Memory constraints
Intel Pentium III processor or higher with a minimum of 128 MB RAM and
600 MB of hard disk space will be required so that software performs its
functions in an optimum manner.
(vii) Operations
The software release will not include automated maintenance of the database.
The university is responsible for manually deleting old/outdated data and for
managing backup and recovery of data.
(viii) Site adaptation requirements
The terminals at the user’s end will have to support the interfaces (both
hardware and software) as mentioned above.
(b) Product functions
The system will allow access only to authorized users like student, administrator
and coordinator depending upon the role. Some of the functions that will be
performed by the software are listed below:
Login facility for authorized users.
Perform modification (by administrator only), such as adding or deleting the
marks obtained by the students.
Provide a printable version of the mark-sheet (result) of the students.
Use of ‘clear’ function to delete the existing information in the database.
(c) User characteristics
None.
(d) Constraints
As Oracle 8i is a powerful database, it can store a large number of records.
The university should have a security policy to maintain information related to
marks, which are to be modified by administrator.
(e) Assumptions and dependencies
The subjects taken by the students in the semester will not change.
When the requirements are gathered according to the user, the SRS is finally reviewed,
approved, and signed by the developer and the user (university). This SRS serves as a
contract for the software development activities.
6. Supporting information
None.
Cash detail = * Entity. Stores the details of the cash deposited or withdrawn *
Amount +
No_of_notes +
Total_amount
UNIT 4 SOFTWARE DESIGN
4.0 INTRODUCTION
Once the requirements document for the software to be developed is available, the software
design phase begins. While the requirement specification activity deals entirely with the
problem domain, design is the first phase of transforming the problem into a solution. In
design phase, customer and business requirements and technical considerations all come
together to formulate a product or a system.
The design process comprises a set of principles, concepts, and practices, which allow a
software engineer to model the system or product that is to be built. This model, known as
the design model, is assessed for quality and reviewed before code is generated and tests are
conducted. The design model provides detail about software data structures, architecture,
interfaces, and components, which are required to implement the system. This chapter
discusses the design elements required to develop a software design model. It also discusses
the design patterns, design notations and design documentation used to represent software
design.
[Figure: Software design principles, including traceability to the analysis model, adapting to change, uniformity and integration, consideration of the programming paradigm, minimising conceptual (semantic) errors, degrading gently, minimising the intellectual distance with the problem in the real world, code reuse, designing for testability, and prototyping]
• Software design should ‘minimise the intellectual distance’ between the software
and problem existing in the real world: The design structure should be such that it
always relates with the real-world problem.
• Code reuse: There is a common saying among software engineers: ‘do not reinvent the
wheel’. Therefore, existing design hierarchies should be effectively reused to increase
productivity.
• Designing for testability: A common practice that has been followed is to separate
testing from design and implementation. That is, the software is designed, implemented,
and then handed over to the testers, who subsequently determine whether or not the
software is fit for distribution and subsequent use by the customer. However, it has
become apparent that the process of separating testing is seriously flawed, as discovering
these types of errors after implementation usually requires the entire or a substantial part
of the software to be redone. Thus, the test engineers should be involved from the very
beginning. For example, they should work with the requirements analysts to devise tests
that will determine whether the software meets the requirements or not.
• Prototyping: Prototyping should be used to explore those aspects of the requirements,
user interface, or software’s internal design, which are not easily understandable. Using
prototyping a quick ‘mock-up’ of the system can be developed. This mock up can be
used as a highly effective means to highlight misconceptions and reveal hidden assumptions
about the user interface and how the software should perform. Prototyping also reduces
the risk of designing software that does not fulfil customer’s requirements.
Note that design principles are often constrained by the existing hardware configuration,
the implementation language, the existing file and data structures, and the existing
organizational practices. Also, the evolution of each software design should be meticulously
recorded for future evaluations, reference, and maintenance.
[Figures: A modular program structure, in which a main program coordinates modules that contain functions and module data along with access to global data; and information hiding, in which clients see only a controlled interface and the details of external interfaces, while a specific design decision (the module's 'secret'), its algorithm and its data structure remain hidden]
Some of the advantages associated with information hiding are listed below:
• Leads to low coupling.
• Emphasises communication through controlled interfaces.
• Reduces the likelihood of adverse effects.
• Limits the global impact of local design decisions.
• Results in higher quality software.
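As a small, assumed illustration of information hiding, the module below exposes only a controlled interface, while its data structure and decision logic (the 'secret') stay internal; all names are made up for the sketch, including the pass mark of 50.

    # Sketch of information hiding: clients use only the public methods,
    # while the underlying data structure and algorithm remain the module's secret.

    class MarksRegister:
        def __init__(self):
            self._marks = {}                 # hidden data structure (the "secret")

        def record(self, enrolment_no, marks):
            """Controlled interface for adding or updating a student's marks."""
            self._marks[enrolment_no] = marks

        def result(self, enrolment_no):
            """Controlled interface for querying a result; pass mark assumed to be 50."""
            return "Pass" if self._marks.get(enrolment_no, 0) >= 50 else "Fail"

    register = MarksRegister()
    register.record("E2025-001", 62)
    print(register.result("E2025-001"))      # clients never touch _marks directly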
(g) Stepwise Refinement: Stepwise refinement is a top-down design strategy used for
decomposing a system from a high level of abstraction into a more detailed level (lower
level) of abstraction. At the highest level of abstraction, function or information is defined
conceptually without providing any information about the internal workings of the function
or internal structure of the data. As we proceed towards the lower levels of abstraction,
more and more details are available.
Software designers start the stepwise refinement process by creating a sequence of
compositions for the system being designed. Each composition is more detailed than the
previous one and contains more components and interactions. The earlier compositions
represent the significant interactions within the system, while the later compositions show
in detail how these interactions are achieved.
To have a clear understanding of the concept, let us consider an example of stepwise
refinement. Every computer program comprises of inputs, process, and output.
• INPUT
Get user’s name (string) through a prompt
Get user’s grade (integer from 0 to 100) through a prompt and validate
• PROCESS
• OUTPUT
This is the first step in refinement. The input phase can be refined further as follows.
• INPUT
Get user’s name through a prompt
Get user’s grade through a prompt
While (invalid grade)
Ask again
• PROCESS
• OUTPUT
Note: Stepwise refinement can also be performed for the PROCESS and OUTPUT phases.
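To show where such a refinement eventually leads, here is one possible (assumed) implementation of the refined INPUT step; the prompts and the 0 to 100 validation rule come from the example above.

    # Sketch of the refined INPUT step: get the name, then get the grade and
    # keep asking until a valid integer from 0 to 100 is entered.

    def read_input():
        name = input("Enter your name: ")
        while True:
            try:
                grade = int(input("Enter your grade (0-100): "))
                if 0 <= grade <= 100:
                    break                     # valid grade, stop asking
            except ValueError:
                pass                          # non-numeric entry, ask again
            print("Invalid grade, please try again.")
        return name, grade

    if __name__ == "__main__":
        user_name, user_grade = read_input()
        print(user_name, user_grade)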
(h) Refactoring: Refactoring is an important design activity that simplifies the design of a
module without changing its behaviour or function. Refactoring can be defined as a process
of modifying a software system to improve the internal structure of the design without
changing its external behaviour. During the refactoring process, the existing design is checked
for unused design elements, redundancy, inefficient or poorly constructed algorithms and
data structures, or any other flaws in the existing design that can be improved to yield a
better design. For example, a design model might yield a component, which exhibits low
cohesion (like a component performs only four functions that have a limited relationship
with one another). Software designers may decide to refactor the component into four
different components, each exhibiting high cohesion. This results in software that is easier
to integrate, test, and maintain.
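A tiny, hypothetical before-and-after sketch of such a refactoring is given below: a low-cohesion component that mixes unrelated responsibilities is split so that each resulting component does one thing, while the externally visible behaviour stays the same.

    # Before: one low-cohesion class mixing unrelated responsibilities.
    class ReportModule:
        def fetch_marks(self, enrolment_no): return {"total": 78}
        def compute_result(self, marks): return "Pass" if marks["total"] >= 50 else "Fail"
        def format_marksheet(self, result): return f"Result: {result}"

    # After refactoring: the same behaviour, but each class now has a single,
    # cohesive responsibility and can be tested and reused independently.
    class MarksRepository:
        def fetch_marks(self, enrolment_no): return {"total": 78}

    class ResultCalculator:
        def compute_result(self, marks): return "Pass" if marks["total"] >= 50 else "Fail"

    class MarksheetFormatter:
        def format_marksheet(self, result): return f"Result: {result}"

    marks = MarksRepository().fetch_marks("E2025-001")
    print(MarksheetFormatter().format_marksheet(ResultCalculator().compute_result(marks)))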
(i) Structural Partitioning: When the architectural style of a design follows a hierarchical
nature, the structure of the program can be partitioned either horizontally or vertically. In
horizontal partitioning, the control modules (shaded boxes in Figure 4.4 (a)) are used to
communicate between functions and execute the functions. Horizontal partitioning provides
the following benefits:
• Results in software that is easier to test and maintain.
• Results in less propagation of adverse effects.
• Results in software that is easier to extend.
However, the disadvantage of using horizontal partitioning is that more data has to be
passed across module interface. This complicates the overall control flow of the problem
especially while processing rapid movement from one function to another.
[Figure 4.4: (a) Horizontal partitioning, in which decision-making (control) modules coordinate 'worker' modules across functions 1, 2 and 3; (b) Vertical partitioning]
In vertical partitioning, the control (decision-making) modules are located at the top and
work is distributed in a top-down manner. That is, top-level modules perform control
function and do little processing, while low-level modules perform all input, computation
and output tasks.
[Figure: Pipes and filters in a data-flow architecture]
Each filter works as an independent entity, which may not know the identity of upstream or
downstream filters. They may specify input format and guarantee what appears as an
output, but they may not know which components appear at the ends of those pipes. A
degenerated version occurs when each filter processes all of its input as a single entity.
This is known as batch sequential system. In these systems, pipes no longer provide a
stream of data. The best-known example of data flow architectures is Unix shell programmes
where components are represented as Unix processes and pipes are created through the file
system. Other examples include compilers, signal-processing systems, parallel programming,
functional programming, and distributed systems. Some advantages associated with the
data-flow architecture are listed below:
• Supports reusability.
• Easy to maintain and enhance.
• Supports specialised analysis and concurrent execution.
Some disadvantages associated with the data-flow architecture are listed below:
• Often leads to a batch organization of processing.
• Poor for interactive applications.
• Difficult to maintain synchronisation between two related streams.
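A data-flow (pipe-and-filter) style can also be sketched with Python generators, where each filter consumes a stream from the upstream filter without knowing who produced it. This is only an analogy to the Unix pipes mentioned above, not an implementation of them.

    # Sketch of a pipe-and-filter pipeline using generators: each filter reads a
    # stream of items from the upstream filter and yields a transformed stream.

    def source(lines):
        for line in lines:
            yield line

    def strip_blanks(stream):
        for line in stream:            # filter 1: drop empty lines
            if line.strip():
                yield line

    def to_upper(stream):
        for line in stream:            # filter 2: transform the data
            yield line.upper()

    pipeline = to_upper(strip_blanks(source(["alpha", "", "beta"])))
    print(list(pipeline))              # ['ALPHA', 'BETA']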
(b) Object-oriented Architecture: In object-oriented architectural style, components of a
system encapsulate data and the operations that are applied to manipulate the data. The
components of this style are the objects and connectors, which operate through procedure
calls (methods). This architectural style has the following important characteristics:
• Objects are responsible for maintaining the integrity of a resource.
• Representation of an object is hidden from other objects.
• Hidden implementation details allow an object to be changed without affecting the accessing
routines of other objects.
• Encapsulation of data allows designers to decompose problems into collections of interacting agents.
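A small C++ sketch of these characteristics follows; the Account class and its members are invented purely for illustration. The balance is hidden behind the method interface, so the object alone is responsible for the integrity of its resource.

#include <iostream>
#include <stdexcept>

class Account {
public:
    void deposit(double amount) {
        // The object guards the integrity of its resource (the balance).
        if (amount <= 0) throw std::invalid_argument("deposit must be positive");
        balance += amount;
    }
    double getBalance() const { return balance; }
private:
    double balance = 0.0;   // hidden representation; can change without affecting callers
};

int main() {
    Account a;
    a.deposit(100.0);
    std::cout << a.getBalance() << '\n';   // prints 100
    return 0;
}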
(c) Layered Architecture: A layered architecture is organised hierarchically with each layer
providing service to the layer above it and serving as a client to the layer below it. In some
systems, inner layers are hidden from all, except the adjacent outer layer. In this type of
architectural style, connectors are defined by the protocols that determine how layers will
interact. An example of this architectural style is the ISO-OSI (International Organization
for Standardization, Open Systems Interconnection) layered communication protocol. In these
systems, lower levels describe hardware connections and higher levels describe the
application. Layered systems support designs based on increasing
levels of abstraction.
Figure: Layered architecture illustrated by the OSI communication layers (Application, Presentation, Session, Transport, Network, Data Link, and Physical layers)
Figure: Data-centred architecture, in which several client software components access a central data store (repository or blackboard)
(e) Call and Return Architecture: A call and return architecture enables software designers
to achieve a program structure, which can be easily modified. This style consists of the
following two substyles:
• Main program/subprogram architecture: In this, function is decomposed into a control
hierarchy where the main program invokes a number of program components, which in
turn may invoke other components.
Figure: Main program/subprogram architecture, in which the main program invokes subordinate components
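A minimal C++ sketch of the main program/subprogram substyle is given below; the subprogram names and data are hypothetical. The main program acts purely as a control module and delegates the work to subordinate components.

#include <iostream>
#include <vector>

// Subordinate components invoked by the main program.
std::vector<int> readInput()              { return {4, 8, 15}; }
int computeSum(const std::vector<int>& v) { int s = 0; for (int x : v) s += x; return s; }
void printResult(int sum)                 { std::cout << "sum = " << sum << '\n'; }

// The main program invokes the subprograms (which may in turn invoke others)
// and does little processing itself.
int main() {
    std::vector<int> data = readInput();
    int sum = computeSum(data);
    printResult(sum);   // prints: sum = 27
    return 0;
}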
(a) Coupling: Coupling is the measure of interdependence between one module and another.
Coupling depends on the interface complexity between components, the point at which
entry or reference is made to a module, and the kind of data that passes across an interface.
For better interface and well-structured system, modules should have low coupling, which
minimises the ‘ripple effect’ where changes in one module cause errors in other modules.
Module coupling is categorised into the following types.
• No direct coupling: Two modules have no direct coupling when they are independent of
each other. In Figure 4.11, Module 1 and Module 2 are not directly coupled.
• Data coupling: Two modules are data coupled if they communicate by passing
parameters. In Figure 4.11, Module 1 and Module 3 are data coupled.
Figure 4.11 No Direct, Data, and Stamp Coupling
• Stamp coupling: Two modules are stamp coupled if they communicate through a
passed data structure that contains more information than necessary for them to perform
their functions. In Figure 4.11, a data structure is passed between Module 1 and Module 4.
Therefore, Module 1 and Module 4 are stamp coupled.
• Control coupling: Two modules are control coupled if they communicate (pass a
piece of information intended to control the internal logic) using at least one 'control
flag'. The control flag is a variable that controls decisions in subordinate or superior
modules. In Figure 4.12, when Module 1 passes a control flag to Module 2, Module 1 and
Module 2 are said to be control coupled.
Figure 4.12 Control Coupling
• Content coupling: Two modules are content coupled if one module changes a statement
in another module, one module references or alters data contained inside another module,
or one module branches into another module. In Figure 4.13, Module B and Module D
are content coupled.
• Common coupling: Two modules are common coupled if they both share the same
global data area. In Figure 4.13, Module C and Module N are common coupled.
Figure 4.13 Content and Common Coupling
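The following C++ sketch contrasts data, stamp, and control coupling; the record type, field names, and functions are hypothetical examples, not part of the figures above.

#include <iostream>
#include <string>

// Hypothetical record used to illustrate stamp coupling.
struct EmployeeRecord {
    std::string name;
    double salary;
    std::string address;   // not needed by printName, but passed along anyway
};

// Data coupling: only the elementary data actually needed is passed.
double annualSalary(double monthlySalary) { return 12 * monthlySalary; }

// Stamp coupling: the whole data structure is passed although only one field is used.
void printName(const EmployeeRecord& rec) { std::cout << rec.name << '\n'; }

// Control coupling: a flag passed by the caller controls the callee's internal logic.
void printReport(bool detailed) {
    if (detailed) std::cout << "full report\n";
    else          std::cout << "summary\n";
}

int main() {
    EmployeeRecord rec{"A. Kumar", 50000.0, "Delhi"};
    std::cout << annualSalary(rec.salary) << '\n';   // data coupling
    printName(rec);                                  // stamp coupling
    printReport(false);                              // control coupling
    return 0;
}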
(b) Cohesion: Cohesion is the measure of strength of the association of elements within a
module. A cohesive module performs a single task within a software procedure and has
little interaction with procedures in other parts of the program. In practice, a designer should
avoid a low level of cohesion when designing a module. Generally, low coupling results in
high cohesion and vice versa. The various types of cohesion are listed below:
• Functional cohesion: In this, the elements within the modules contribute to the execution
of one and only one problem related task.
• Sequential cohesion: In this, the elements within the modules are involved in activities
in such a way that output data from one activity serves as input data to the next activity.
• Communicational cohesion: In this, the elements within the modules perform different
functions, yet each function references the same input or output information.
• Procedural cohesion: In this, the elements within the modules are involved in different
and possibly unrelated activities, in which control flows from one activity to the next.
• Temporal cohesion: In this, the elements within the modules contain unrelated activities
that can be carried out at the same time.
• Logical cohesion: In this, the elements within the modules perform similar activities, and
the activity to be executed is selected from outside the module.
• Coincidental cohesion: In this, the elements within the modules perform activities
with no meaningful relationship to one another.
After having discussed the various types of cohesion, Figure 4.21 illustrates a procedure
that can be used to determine the type of module cohesion for software design.
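To make the contrast concrete, the following C++ sketch shows a functionally cohesive module; the function and its purpose (computing the roots of a quadratic equation) are an invented example, not taken from the text.

#include <cmath>
#include <iostream>

// Functional cohesion: every element of the module contributes to one
// problem-related task, namely computing the real roots of a*x^2 + b*x + c = 0.
bool quadraticRoots(double a, double b, double c, double& r1, double& r2) {
    double d = b * b - 4 * a * c;        // discriminant
    if (a == 0 || d < 0) return false;   // not quadratic, or no real roots
    r1 = (-b + std::sqrt(d)) / (2 * a);
    r2 = (-b - std::sqrt(d)) / (2 * a);
    return true;
}

int main() {
    double r1 = 0, r2 = 0;
    if (quadraticRoots(1, -3, 2, r1, r2))
        std::cout << r1 << ' ' << r2 << '\n';   // prints 2 1
    return 0;
}

A coincidentally cohesive module, by contrast, would bundle this calculation with unrelated activities such as printing a report and opening a file.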
Figure: A typical dialog box interface for scale settings, with fields for the X axis and Y axis (Minimum, Maximum, Ticks) and OK/Cancel buttons
Designing a good and efficient user interface is a common objective among software designers.
But what makes a user interface look 'good'? Software designers strive to achieve a good
user interface by following three rules, namely, ease of learning, efficiency of use, and aesthetic
appeal.
(a) Ease of Learning: Ease of learning describes how quickly and effortlessly users learn
to use the software. Ease of learning is primarily important for new users. However, even
experienced users face a learning experience problem when they attempt to expand their
usage of the product or when they use a new version of the software. Here, the principle
of state visualisation is applied, which states that each change in the behaviour of the
software should be accompanied by a corresponding change in the appearance of the
interface.
Developers of software applications with a very large feature set can expect only few users
to have mastered the entire feature set. Thus, designers of these applications should be
concerned about the ease of learning for otherwise experienced users. Generally, to ease the
task of learning, designers make use of the tools listed below:
• Affordance: Provides clues that suggest what a machine or tool can do and how to
use it. For example, the style of a door handle on the doors of many departmental
stores, offices, and shops suggest whether to pull a door or push a door to open. If the
wrong style door handle is used, people struggle with the door. In this way, the door
handle is more than just a tool for physically helping you to open the door; it is also an
affordance showing you how the door opens. Similarly, software designers while
developing user interface should offer hints as to what each part does and how it
works.
• Consistency: Designers strive to maintain consistency within the interface. Every aspect
of the interface, including seemingly minor details such as font usage and colours, is
kept consistent so that the behaviour of the interface remains consistent. Here, the principle
of coherence (that is, the behaviour of the program should be internally and externally consistent) is applied.
Internal consistency means that the program’s behaviour must make “sense” with
respect to other parts of the program. For example, if one attribute of an object (for
example, colour) is modified using a pop-up menu, then it is expected that other attributes
of the object will also be edited in a similar manner. External consistency means that
the program is consistent with the environment in which it runs. This includes consistency
with both the operating system and the suite of other applications that run within that
operating system.
(b) Efficiency of Use: Once a user knows how to perform tasks, the next question is how
efficiently can the user solve problems with the software? Efficiency can be evaluated
reasonably only if users are no longer engaged in learning how to do the task and are rather
engaged in performing the task.
Defining an efficient interface requires a deep understanding of the behaviour of target
audience. How frequently do they perform the task? How frequently do they use the interface
devices? How much training do they have? How distracted are they? A few guidelines help
in designing an efficient interface.
• The task should require minimal physical actions. The desire of experienced users for
hot keys and shortcuts to pull-down menu actions is a well-known example of reducing
the number of actions required to perform a task.
• The task should require minimal mental effort as well. A user interface, which requires
the user to remember specific details will be less popular than one that remembers those
details for the user. Similarly, an interface which requires the user to make many
decisions, particularly non-trivial decisions, will be less popular than the one that requires
the user to make fewer or simpler decisions.
(c) Aesthetically Pleasing: Today, look and feel is one of the biggest USPs (unique selling
points) in software design. An attractive user interface improves sales because
people like to have things that look nice. An attractive user interface also makes the user feel
better while using the product (as it provides ease of use). Many software organisations
focus specifically on designing software that has an attractive look and feel so that they
can lure customers/users towards their product(s).
In addition to the above-mentioned goals, there exists a principle of metaphor which, if
applied to the software's design, results in a better and more effective way of creating a user
interface. This principle states that a complex software system can be understood easily if
the user interface is created in a way that resembles an already familiar system. For
example, the popular Windows operating system uses a similar (not identical) look and feel in all
of its versions so that users are able to use it in a user-friendly manner.
Figure: Software design reviews (preliminary design review, critical design review, and program design review), which involve customers and users, software designers, and programmers and analysts
(a) Preliminary Design Review: The preliminary design review is a formal inspection of
the high-level architectural design of the software, which is conducted to verify that the
design satisfies the functional and non-functional requirements and is in conformance with
the requirements specified by the users. The purpose is to:
• Ensure that software requirements are reflected in the software architecture.
• Specify whether effective modularity is achieved or not.
• Define interfaces for modules and external system elements.
• Ensure that the data structure is consistent with information domain.
• Ensure that maintainability has been considered.
• Assess the quality factors.
In this review it is verified that the proposed design includes the required hardware and
interfaces with the other parts of the computer-based system. To conduct a preliminary
design review, a review team is formed where each review team member acts as an
independent person authorised to make necessary comments and decisions. This review
team comprises the individuals listed below:
• Customers: Responsible for defining the software’s requirements.
• Moderator: Presides over the review. The moderator encourages discussions, maintains
the main objective throughout the review, settles disputes and gives unbiased observations.
In short, moderator is responsible for smooth functioning of the review.
• Secretary: Secretary is a silent observer who does not take part in the review process
but instead records the main points of the review.
• System designers: These include persons involved in designing of not only the software
but also the entire computer-based system.
• Other stakeholders (developers) not involved in the project: These people provide
an outsider’s idea on the proposed design. This is beneficial as these people can advise
‘fresh’ ideas, address issues of correctness, consistency, and good design practice.
If any discrepancies are noted in the review process, the faults are assessed on the
basis of their severity. That is, if there exists a minor fault, the fault is resolved among the
review team. However, if there exists a major fault, the review team may agree to
revise the proposed conceptual design. Note that the preliminary design review is
conducted again to assess the effectiveness of the revised (new) design.
(b) Critical Design Review: Once the preliminary design review is successfully completed
and the customer(s) is satisfied with the proposed design, the critical design review is conducted.
The purpose of this review is to:
• Ensure that the conceptual and technical designs are free of defects.
• Determine that the design under review satisfies the design requirements established in
the architectural design specifications.
• Critically assess the functionality and maturity of the design.
• Justify the design to outsiders so that the technical design is more clear, effective and
easy to understand.
In this review, diagrams and data are used (sometimes both) to evaluate the alternative
design strategies and how and why the major design decisions have been taken. Just like
the preliminary design review, to carry out critical design review a review team is formed.
In addition to the team members involved in the preliminary design review, this team also
comprises the individuals listed below:
• System tester: Understands the technical issues of the design and compares them with
designs created for similar projects.
• Analyst: Responsible for writing system documentation.
• Program designers for this project: Understands the design in order to derive detailed
program designs.
Note: This review does not involve customers.
Similar to the preliminary design review, if any discrepancies are noted in the critical design
review process, the faults are assessed on the basis of their severity. A minor fault is
resolved among the review team, while a major fault may lead the review team to agree
to revise the proposed technical design. Note that the critical design review is conducted
again to assess the effectiveness of the revised (new) design.
(c) Program Design Review: On successful completion of critical design review,
program design review is conducted to get feedback on the designs before implementation
(coding) begins. This review is conducted between the designers and developers with the
purpose to:
• Ensure that the detailed design is feasible.
• Ensure that the interface is consistent with architectural design.
• Specify whether design is amenable to implementation language.
• Ensure that structured programming constructs are used throughout.
• Ensure that the implementation team will be able to understand the proposed design.
To conduct a program design review, a review team is formed, which comprises system
designers, system testers, a moderator, a secretary, and an analyst. In addition, the review team
includes program designers, and developers. The program designers after completing the
program designs present their plans to a team of other designers, analysts, and programmers
for comment and suggestions. Note that a successful program design review presents
considerations relating to coding plans before coding begins.
2. References
3. Definition
4. Purpose of an SDD
Attributes Description
Identification Identifies name of the entity. All the entities have a unique name.
Type Describes the kind of entity. This specifies the nature of the entity.
Purpose Specifies why the entity exists.
Function Specifies what the entity does.
Subordinates Identifies subordinate entity of an entity.
Dependencies Describes relationships that exist between one entity and other entities.
Interface Describes how entities interact among themselves.
Resources Describes elements used by the entity that are external to the design.
Processing Specifies rules used to achieve the specified functions.
Data Identifies data elements that form part of the internal entity.
Design view                Description                                         Attributes used
Decomposition description  Partitions the system into design entities          Identification, type, purpose, function, and subordinates
Detail description         Describes internal details of the design entity     Identification, processing, and data
Design                      Description
Data design                 Includes an enhanced ERD as well as the data object design and the data file design.
System architecture design  Includes detailed diagrams of the system, server, and client architecture.
Procedural design           Includes the functional partitioning from the requirements specifications document, and goes into great detail to describe each function (module/component).
User interface design       Includes the graphical user interfaces that will be seen by the user when operating the Higher Education online library system.
Figure: System architecture of the online library system, in which the Internet and LAN connections pass through a firewall to the web server, application server, and communication layers
Figure: Level 0 and Level 1 data flow diagrams of the online library system, showing student/faculty login and user verification, display of the main menu, media search and display of search results (including the "No Matches" case), and account status check, all interacting with the database
Function 2: Media Search Function. Searches the media database for books, magazines/periodicals, and multimedia.
Function 3: Media Reservation Function. Allows users to reserve media resources that are currently checked out.
Function 4: Account Status Check Function. Allows users to check the status of their library account.
Function 5: Overdue Fee Payment Function. Allows users to pay overdue fees through the online banking system.
Function 6: User Account Set-up Function. Allows library staff to add, delete, and update user accounts.
Function 7: Media Check-in/Check-out Function. Allows library staff to check media in and out.
Function 10: Access Control Function. Controls the user's level of access and provides user verification.
Figure: Sample user interface screens of the library system, showing the login screen (PIN entry and Submit), the main menu (Perform Media Search, Check Account Status, Exit), and the media search form (Title, Author, Subject, ISBN No., Publication, More Matches)
The user can also check their account status by selecting Check Account Status (Function
4) from the main menu. The user may check the status of media that are currently issued to
or reserved by them. The title, author, and due date for each item will be displayed.
Figure: Inheritance hierarchy in which the class Vehicles has the subclasses Automobiles and Pulled-Vehicles, which in turn have the subclasses Car and Rickshaw respectively
Polymorphism: Polymorphism (from the Greek, meaning "having multiple forms") is the
ability of an entity such as a variable, function, or message to be processed in more than one
form. It can also be defined as the property of objects belonging to the same or different
classes to respond to the same message or function call in different ways. For example, if a
message change_gear is passed to the class vehicles, then all the automobiles will behave
alike, but the vehicles belonging to the class pulled_vehicles will not respond to the message.
A small sketch of this behaviour is given below.
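The C++ sketch below follows the Vehicles/Automobiles/Pulled-Vehicles hierarchy from the figure; the single method move() stands in for messages such as change_gear, and the exact class bodies are assumptions made only for illustration.

#include <iostream>
#include <memory>
#include <vector>

class Vehicle {
public:
    virtual void move() const { std::cout << "vehicle moves\n"; }
    virtual ~Vehicle() = default;
};

class Automobile : public Vehicle {
public:
    void move() const override { std::cout << "automobile changes gear and drives\n"; }
};

class PulledVehicle : public Vehicle {
public:
    void move() const override { std::cout << "pulled vehicle is drawn along\n"; }
};

int main() {
    std::vector<std::unique_ptr<Vehicle>> fleet;
    fleet.push_back(std::make_unique<Automobile>());     // e.g., a car
    fleet.push_back(std::make_unique<PulledVehicle>());  // e.g., a rickshaw
    for (const auto& v : fleet)
        v->move();   // the same message produces class-specific behaviour
    return 0;
}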
5.0 INTRODUCTION
Software testing is an essential part of software development process, which is used to
identify the correctness, completeness and quality of developed software. Its main objective
is to detect errors in the software. Errors prevent software from producing outputs according
to user requirements. Errors occur when any aspect of a software product is incomplete,
inconsistent, or incorrect. Errors can be broadly classified into three types, namely,
requirements errors, design errors, and programming errors. To avoid these errors, it is
necessary that: requirements are examined for conformance to user needs, software design
is consistent with the requirements and notational convention, and the source code is
examined for conformance to the requirements specification, design documentation, and
user expectations. All this can be accomplished through efficacious means of software
testing.
Software testing involves activities aimed at evaluating an attribute or capability of a program
or system and ensuring that it meets its required results. It should be noted that testing is
fruitful only if it is performed in a correct manner. Through effective software testing, the
software can be examined for correctness, comprehensiveness, consistency, and adherence
to standards. This helps in delivering high quality software products and lowering maintenance
costs, thus leading to more contented users.
(a) Objectives of Software Testing: Software testing evaluates software by manual and
automated means to ensure that it is functioning in accordance with user requirements.
The main objectives of software testing are listed below:
• To remove errors, which prevent software from producing outputs according to user
requirements.
• To remove errors that lead to software failure.
• To determine whether the system meets business and user needs.
• To ensure that software is developed according to user requirements.
• To improve the quality of software by removing maximum possible errors from it.
(b) Testing in Software Development Life Cycle (SDLC): Software testing comprises of
a set of activities, which are planned before testing begins. These activities are carried out
for detecting errors that occur during various phases of SDLC. The role of testing in
software development life cycle is listed in Table 5.1.
Table 5.1 Role of Testing in the Software Development Life Cycle (errors detected, conformance to requirements, and quality achieved in each phase)
(c) Bugs, Error, Fault and Failure: The purpose of software testing is to find bugs,
errors, faults, and failures present in the software. Bug is defined as a logical mistake,
which is caused by a software developer while writing the software code. Error is defined
as the difference between the outputs produced by the software and the output desired by
the user (expected output). Fault is defined as the condition that leads to malfunctioning of
the software. Malfunctioning of software is caused due to several reasons, such as change
in the design, architecture, or software code. A defect that causes an error in operation or
has a negative impact is called a failure. Failure is defined as the state in which software is unable
to perform a function according to user requirements. Bugs, errors, faults, and failures
prevent software from performing efficiently and hence, cause the software to produce
unexpected outputs. Errors can be present in the software due to the reasons listed below:
• Programming errors: Programmers can make mistakes while developing the source
code.
• Unclear requirements: The user is not clear about the desired requirements or the
developers are unable to understand the user requirements in a clear and concise manner.
• Software complexity: The complexity of current software can be difficult to
comprehend for someone who does not have prior experience in software development.
• Changing requirements: The user may not understand the effects of a change. Whether
the changes are minor or major, known and unknown dependencies among parts of the
project are likely to interact and cause problems. This makes it difficult to keep track of
changes and may ultimately result in errors.
• Time pressures: Maintaining schedule of software projects is difficult. When deadlines
are not met, the attempt to speed up the work causes errors.
• Poorly documented code: It is difficult to maintain and modify code that is badly
written or poorly documented. This causes errors to occur.
Note: In this chapter, ‘error’ is used as a general term for ‘bugs’, ‘errors’, ‘faults’, and
‘failures’.
(d) Who Performs Testing?: Testing is an organisational issue, which is performed either
by the software developers (who originally developed the software) or by an independent
test group (ITG), which comprises software testers. The software developers are
considered to be the best persons to perform testing as they have the best knowledge about
the software. However, since software developers are involved in the development process,
they may have their own interest in showing that the software is error free, meets user
requirements, and is within schedule and budget. This vested interest hinders the process
of testing.
To avoid this problem, the task of testing is assigned to an independent test group (ITG),
which is responsible to detect errors that may have been neglected by the software developers.
ITG tests the software without any discrimination since the group is not directly involved
in the development process. However, the testing group does not completely take over the
testing process, instead it works with the software developers in the software project to
ensure that testing is performed in an efficient manner. During the testing process, developers
are responsible for correcting the errors uncovered by the testing group.
Generally, an independent testing group forms a part of the software development project
team. This is because the group becomes involved during the specification activity and
stays involved (planning and specifying test procedures) throughout the development process.
The various advantages and disadvantages associated with an independent test group
are listed in Table 5.2.
Table 5.2 Advantages and Disadvantages of Independent Test Group
Advantages:
• Independent testing is typically more efficient at detecting defects related to special cases, interaction between modules, and system-level usability and performance problems.
• Programmers are neither trained nor motivated to test; thus, the ITG serves as an immediate solution.
• Test groups can provide insight into the reliability of the software before it is delivered to the user.
Disadvantages:
• Keeping independent test groups can result in duplication of effort. For example, the test group may use resources to perform tests that have already been performed by the developers.
• Problems can arise when the test group is not physically collocated with the design group.
• The cost of maintaining separate test groups is very high.
To plan and perform testing, software testers should have the knowledge about the function
for which the software has been developed, the inputs and how they can be combined, and
the environment in which the software will eventually operate. This process is time-
consuming and requires technical sophistication and proper planning on the part of the
testers. To achieve technical know-how, testers must not only have good development
skills but also possess knowledge about formal languages, graph theory, and algorithms.
Other factors that should be kept in mind while performing testing are:
• Time available to perform testing.
• Training required to acquaint testers with the software.
• Attitude of testers.
• Relationship between testers and developers.
Note: Along with software testers, customers, end-users, and management also play an
important role in software testing.
• Include test cases for invalid and unexpected conditions: Generally, software
produces correct outputs when it is tested using accurate inputs. However, if unexpected
input is given to the software, it may produce erroneous outputs. Hence, test cases that
detect errors even when unexpected and incorrect inputs are specified should be developed.
• Test the modified program to check its expected performance: Sometimes, when
certain modifications are made in software (like adding of new functions) it is possible
that software produces unexpected outputs. Hence, software should be tested to verify
that it performs in the expected manner even after modifications.
5.2.2 Testability
The ease with which a program is tested is known as testability. Testability can be defined
as the degree to which a program facilitates the establishment of test criteria and execution
of tests to determine whether the criteria have been met or not. There are several
characteristics of testability, which are listed below:
• Easy to operate: High quality software can be tested in a better manner. This is because
if software is designed and implemented considering quality, then comparatively fewer
errors will be detected during the execution of tests.
• Observability: Testers can easily identify whether the output generated for certain
input is accurate or not simply by observing it.
• Decomposability: By breaking software into independent modules, problems can be
easily isolated and the modules can be easily tested.
• Stability: Software becomes stable when changes made to the software are controlled
and when the existing tests can still be performed.
• Easy to understand: Software that is easy to understand can be tested in an efficient
manner. Software can be properly understood by gathering maximum information about
it. For example, to have a proper knowledge of software, its documentation can be
used, which provides complete information of the software code, thereby increasing its
clarity and making testing easier. Note that documentation should be easily accessible,
well organised, specific, and accurate.
Figure: Characteristics of testability: easy to operate, observability, decomposability, stability, and easy to understand
Steps in Development of Test Plan: A carefully developed test plan facilitates effective
test execution, proper analysis of errors, and preparation of error report. To develop a test
plan, a number of steps are followed, which are listed below:
1. Set objectives of test plan: Before developing a test plan, it is necessary to understand
its purpose. The objectives of a test plan depend on the objectives of software. For
example, if the objective of software is to accomplish all user requirements, then a test
plan is generated to meet this objective. Thus, it is necessary to determine the objective
of software before identifying the objective of test plan.
2. Develop a test matrix: A test matrix indicates the components of software that are to
be tested. It also specifies the tests required to test these components. The test matrix is
also used as a test proof to show that a test exists for all the components that require
testing.
• Test environment: Identifies the hardware, software, automated testing tools, operating
system, compilers, and sites required to perform testing. It also identifies the staffing
and training needs.
• Schedule: Provides a detailed schedule of testing activities and defines the responsibilities
of the respective people. In addition, it indicates dependencies of testing activities and the
time frames for them.
• Approvals and distribution: Identifies the individuals who approve a test plan and its
results. It also identifies the people to whom the test plan document(s) is distributed.

Check Your Progress
4. What is a test plan?
5. Briefly describe components of a test plan.
(a) Test Case Generation: The process of generating test cases helps in locating problems
in the requirements or design of software. To generate a test case, initially a criterion that
evaluates a set of test cases is specified. Then, a set of test cases that satisfy the specified
criterion is generated. There are two methods used to generate test cases, which are listed
below:
• Code based test case generation: This approach, also known as structure based test
case generation is used to analyse the entire software code to generate test cases. It
considers only the actual software code to generate test cases and is not concerned with
the user requirements. Test cases developed using this approach are generally used for
unit testing. These test cases can easily test statements, branches, special values, and
symbols present in the unit being tested.
• Specification based test case generation: This approach uses specifications, which
indicate the functions that are produced by software to generate test cases. In other
words, it considers only the external view of software to generate test cases. Specification
based test case generation is generally used for integration testing and system testing to
ensure that software is performing the required task. Since this approach considers only
the external view of the software, it does not test the design decisions and may not cover
all statements of a program. Moreover, as test cases are derived from specifications, the
errors present in these specifications may remain uncovered.
Several tools known as test case generators are used for generating test cases. In addition
to test case generation, these tools specify the components of software that are to be
tested. An example of a test case generator is Astra QuickTest, which captures business
processes in a visual map and generates data-driven tests automatically.
(b) Test Case Specifications: The test plan is not concerned with the details of testing a
unit. Moreover, it does not specify the test cases to be used for testing units. Thus, test
case specification is done in order to test each unit separately. Depending on the testing
method specified in test plan, features of unit that need to be tested are ascertained. The
overall approach stated in the test plan is refined into specific test methods and into the criteria
to be used for evaluation. Based on test methods and criteria, test cases to test the unit are
specified.
For each unit being tested, these test case specifications provide test cases, inputs to be
used in test cases, conditions to be tested by test cases, and outputs expected from test
cases. Generally, test cases are specified before they are used for testing. This is because,
testing has many limitations and effectiveness of testing is highly dependent on the nature
of test cases.
Test case specifications are written in the form of a document. This is because the quality
of test cases needs to be evaluated. To evaluate the quality of test cases, test case review is
done for which a formal document is needed. The review of test case document ensures
that test cases satisfy the chosen criteria and are consistent with the policy specified in the
test plan. The other benefit of specifying test cases formally is that it helps testers to select
a good set of test cases.
Check Your Progress
6. Define test case.
7. What is exhaustive testing and ideal test case?
8. Define the role played by test case specification.

Unit level testing is not just performed once during software development; rather, it is
repeated whenever software is modified or used in a new environment. The other points
noted about unit testing are listed below:
• Each unit is tested in isolation from other parts of a program.
• The developers themselves perform unit testing.
• Unit testing makes use of white box testing methods.
Figure: Unit testing, in which test cases are applied to the module to be tested and the results are examined
Unit testing is used to verify the code produced during software coding and is responsible
for assessing the correctness of a particular unit of source code. In addition, unit testing
performs the functions listed below:
• Tests all control paths to uncover maximum errors that occur during the execution of
conditions present in the unit being tested.
• Ensures that all statements in the unit are executed at least once.
• Tests data structures (like stacks, queues) that represent relationships among individual
data elements.
• Checks the range of inputs given to units. This is because every input range has a
maximum and minimum value and the input given should be within the range of these
values.
• Ensures that the data entered in variables is of the same data type as defined in the unit.
• Checks all arithmetic calculations present in the unit with all possible combinations of
input values.
(a) Types of Unit Testing : A series of stand-alone tests are conducted during unit testing.
Each test examines an individual component that is new or has been modified. A unit test is
also called a module test because it tests the individual units of code that form part of the
program and eventually the software. In a conventional structured programming language,
such as C, the basic unit is a function or sub-routine while, in object-oriented language
such as C++ the basic unit is a class.
The various tests that are performed as a part of unit testing are listed below:
• Module interface: These are tested to ensure that information flows in a proper manner
into and out of the ‘unit’ under test. Note that test of data flow (across a module
interface) is required before any other test is initiated.
• Local data structure: These are tested to ensure that the temporarily stored data maintains
its integrity while an algorithm is being executed.
(c) Unit Testing Procedure: Unit tests can be designed before coding begins or after the
code is developed. Review of this design information guides the creation of test cases,
which are used to detect errors in various units. Since a component is not an independent
program, two types of modules, drivers and stubs, are used to test the units independently. A driver is
a module that passes input to the unit to be tested. It accepts test case data and then passes
the data to the unit being tested. After this, driver prints the output produced. Stub is a
module that works as unit referenced by the unit being tested. It uses the interface of the
subordinate unit, does minimum data manipulation, and returns control back to the unit
being tested.
Figure: Unit test environment, in which a driver supplies test cases to the unit to be tested and stubs stand in for its subordinate units
Note: Drivers and stubs are not delivered with the final software product. Thus, they represent
an overhead.
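The following C++ sketch shows a driver and a stub in this arrangement; the unit under test (a price calculation), the stubbed subordinate (a tax-rate lookup), and all names are hypothetical.

#include <cassert>
#include <iostream>
#include <string>

// Stub: stands in for a subordinate unit that the unit under test calls;
// it does minimal data manipulation and returns control.
double taxRateStub(const std::string& /*category*/) { return 0.25; }

// Unit under test: computes a price including tax using the subordinate unit.
double priceWithTax(double basePrice, double (*taxRate)(const std::string&)) {
    return basePrice * (1.0 + taxRate("standard"));
}

// Driver: accepts test case data, passes it to the unit under test,
// and reports the output produced.
int main() {
    double result = priceWithTax(100.0, taxRateStub);
    std::cout << "priceWithTax(100.0) = " << result << '\n';   // prints 125
    assert(result == 125.0);
    return 0;
}

Neither the driver nor the stub ships with the final product; they exist only to let the unit be exercised in isolation.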
Figure: Integration testing, in which unit-tested components are combined into integrated modules
The big bang approach and incremental integration approach are used to integrate modules
of a program. In big bang approach, initially, all modules are integrated and then the entire
program is tested. However, when the entire program is tested, it is possible that a set of
errors is detected. It is difficult to correct these errors since it is difficult to isolate the exact
cause of the errors when program is very large. In addition, when one set of errors is
corrected, new sets of errors arise and this process continues indefinitely.
To overcome the above problem, incremental integration is followed. This approach tests
program in small increments. It is easier to detect errors in this approach because only a
small segment of software code is tested at a given instance of time. Moreover, interfaces
can be tested completely if this approach is used. Various kinds of approaches are used for
performing incremental integration testing, namely, top-down integration testing, bottom-
up integration testing, regression testing, and smoke testing.
(a) Top-down Integration Testing : In this testing, software is developed and tested by
integrating the individual modules, moving downwards in the control hierarchy. In top-
down integration testing, initially only one module known as the main control module is
tested. After this, all the modules called by it are combined with it and tested. This process
continues till all the modules in the software are integrated and tested.
It is also possible that a module being tested calls some of its subordinate modules. To
simulate the activity of these subordinate modules, a stub is written. Stub replaces modules
that are subordinate to the module being tested. Once, the control is passed to the stub, it
does minimal data manipulation, provides verification of entry, and returns control back to
the module being tested.
Figure: Control hierarchy of modules A1 to A8 used to illustrate top-down integration
To perform top-down integration testing, a number of steps are followed, which are listed
below:
1. The main control module is used as a test driver and stubs are used to replace all the
other modules, which are directly subordinate to the main control module.
2. Subordinate stubs are then replaced one at a time with actual modules. The manner in
which the stubs are replaced depends on the approach (depth first or breadth first)
used for integration.
3. Every time a new module is integrated, tests are conducted.
4. After tests are complete, another stub is replaced with the actual module.
5. Regression testing is conducted to ensure that no new errors are introduced.
Top-down integration testing uses either depth-first integration or breadth-first integration
for integrating the modules. In depth-first integration, the modules are integrated starting
from the left and moving down in the control hierarchy. As shown in Figure 5.12(a),
initially, modules ‘A1’, ‘A2’, ‘A5’ and ‘A7’ are integrated. Then, module ‘A6’ integrates
with module ‘A2’. After this, control moves to the modules present at the centre of control
hierarchy, that is, module ‘A3’ integrates with module ‘A1’ and then module ‘A8’ integrates
with module ‘A3’. Finally, the control moves towards right, integrating module ‘A4’ with
module ‘A1’.
Figure 5.12 Depth-first and breadth-first integration of modules A1 to A8
In breadth-first integration, initially, all modules at the first level are integrated moving
downwards, integrating all modules at the next lower levels. As shown in Figure 5.12 (b),
initially, modules ‘A2’, ‘A3’, and ‘A4’ are integrated with module ‘A1’ and then it moves
down integrating modules ‘A5’ and ‘A6’ with module ‘A2’ and module ‘A8’ with module
‘A3’. Finally, module ‘A7’ is integrated with module ‘A5’.
The various advantages and disadvantages associated with top-down integration are listed
in Table 5.4.
Table 5.4 Advantages and Disadvantages of Top-down Integration
Advantages:
• Behaviour of modules at high levels is verified early.
• None or only one driver is required.
• Modules can be added one at a time with each step.
• Supports both the breadth-first method and the depth-first method.
• Modules are tested in isolation from other modules.
Disadvantages:
• Delays the verification of the behaviour of modules present at lower levels.
• Large numbers of stubs are required in case the lowest level of the software contains many functions.
• Since stubs replace modules present at lower levels in the control hierarchy, no data flows upward in the program structure. To avoid this, the tester has to delay many tests until stubs are replaced with actual modules, or has to integrate the software from the bottom of the control hierarchy moving upward.
• A module cannot be tested in isolation from other modules because it has to invoke other modules.
(b) Bottom-up Integration Testing: In this testing, individual modules are integrated starting
from the bottom and then moving upwards in the hierarchy. That is, bottom-up integration
testing combines and tests the modules present at the lower levels proceeding towards the
modules present at higher levels of the control hierarchy.
Some of the low-level modules present in software are integrated to form clusters or builds
(collection of modules). After clusters are formed, a driver is developed to co-ordinate test
case input and output and then, the clusters are tested. After this, drivers are removed and
clusters are combined moving upwards in the control hierarchy.
Figure 5.13 shows modules, drivers, and clusters in bottom-up integration. The low-level
modules ‘A4’, ‘A5’, ‘A6’, and ‘A7’ are combined to form cluster ‘C1’. Similarly, modules
‘A8’, ‘A9’, ‘A10’, ‘A11’, and ‘A12’ are combined to form cluster ‘C2’. Finally, modules
‘A13’ and ‘A14’ are combined to form cluster ‘C3’. After clusters are formed, drivers are
developed to test these clusters. Drivers ‘D1’, ‘D2’, and ‘D3’ test clusters ‘C1’, ‘C2’, and
‘C3’ respectively. Once these clusters are tested, drivers are removed and clusters are
integrated with the modules. Cluster ‘C1’ and cluster ‘C2’ are integrated with module
‘A2’. Similarly, cluster ‘C3’ is integrated with module ‘A3’. Finally, both the modules ‘A2’
and ‘A3’ are integrated with module ‘A1’.
Figure 5.13 Bottom-up integration: modules A4 to A7 form cluster C1, modules A8 to A12 form cluster C2, and modules A13 and A14 form cluster C3; the clusters are tested using drivers D1, D2, and D3 and then integrated with modules A2, A3, and finally A1
The various advantages and disadvantages associated with bottom-up integration are listed
in Table 5.5.
(c) Regression Testing: Software undergoes changes every time a new module is added
as part of integration testing. Changes can occur in the control logic or input/output media,
and so on. It is possible that new data flow paths are established as a result of these
changes, which may cause problems in the functioning of some parts of the software that
was previously working perfectly. In addition, it is also possible that new errors may
surface during the process of correcting existing errors. To avoid these problems, regression
testing is used.
Figure: Regression testing, performed whenever a new unit is added to the unit-tested components being integrated during integration testing
Regression testing ‘re-tests’ the software or part of it to ensure that no previously working
components, functions, or features fail as a result of the error correction process and
integration of modules. Regression testing is considered an expensive but a necessary
activity since it is performed on modified software to provide knowledge that changes do
not adversely affect other system components. Thus, regression testing can be viewed as
a quality control tool that ensures that the newly modified code still complies with its
specified requirements and that unmodified code has not been affected by the change. For
instance, suppose a new function is added to the software, or a module is modified to
improve its response time. The changes may introduce errors into the software that was
previously correct. For example, suppose part of the code written below works properly.
x = b + 1;
proc(z);
b = x + 2;
x = 3;
Now suppose that, in an attempt to optimise the code, it is transformed into the following:
proc(z);
b = b + 3;
x = 3;
This may result in an error if procedure ‘proc’ accesses variable ‘x’. Thus, testing should
be organised with the purpose of verifying possible degradations of correctness or other
qualities due to later modifications. During regression testing, existing test cases are executed
on the modified software so that errors can be detected. Test cases for regression testing
consist of three different types of tests, which are listed below:
• Tests that are used to execute software function.
• Tests that check the function, which is likely to be affected by changes.
• Tests that check software modules that have already been modified.
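As a small sketch of re-executing an existing test after such a modification, the C++ fragment below wraps the "optimised" sequence from the example above in a test; the global variables and the empty proc are assumptions made only so that the fragment is self-contained.

#include <cassert>

int b = 0, x = 0, z = 0;
void proc(int) { /* in this sketch, proc does not touch x */ }

void modifiedCode() {   // the transformed sequence from the text
    proc(z);
    b = b + 3;
    x = 3;
}

int main() {
    b = 5;
    modifiedCode();
    // Regression tests re-check behaviour that worked before the change:
    // with b initially 5, the original sequence also left b equal to 8.
    assert(b == 8);
    assert(x == 3);
    return 0;
}

If proc were later changed to read or modify x, this same test would be re-run and would expose any degradation introduced by the change.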
The various advantages and disadvantages associated with regression testing are listed in
Table 5.6.
(d) Smoke Testing: Smoke testing is defined as a subset of all defined test cases that cover
the main functionality of a component or system, in order to ascertain that the most crucial
functions of a program work properly. It is mainly used for time-critical software and allows
the development team to assess the software frequently.
Smoke testing is performed when software is under development. As the modules of
software are developed, they are integrated to form a ‘cluster’. After the cluster is formed,
certain tests are designed to detect errors that prevent the cluster from performing its function.
Next, the cluster is integrated with other clusters thereby leading to the development of the
entire software, which is smoke tested frequently. A smoke test should possess the following
characteristics:
• Should run quickly.
• Should try to cover a large part of software and if possible the entire software.
• Should be easy for testers to perform smoke testing on software.
• Should be able to detect all errors present in the cluster being tested.
• Should try to find showstopper errors.
Generally, smoke testing is conducted every time a new cluster is developed and integrated
with the existing cluster. Smoke testing takes minimum time to detect errors that occur due
to integration of clusters. This reduces the risk associated with the occurrence of problems,
such as introduction of new errors in software. A cluster cannot be sent for further testing
unless smoke testing is performed on it. Thus, smoke testing determines whether the
cluster is suitable to be sent for further testing or not. Other benefits associated with smoke
testing are listed below:
• Minimises the risks, which are caused due to integration of different
modules: Since smoke testing is performed frequently on software, it allows the testers
to uncover errors as early as possible, thereby reducing the chance of causing severe
impact on the schedule when there is delay in uncovering errors.
• Improves quality of final software: Since smoke testing detects both functional and
architectural errors as early as possible, they are corrected early, thereby resulting in
high quality software.
• Simplifies detection and correction of errors: As smoke testing is performed almost
every time a new code is added, it becomes clear that the probable cause of errors is the
new code.
• Assesses progress easily: Since smoke testing is performed frequently, it keeps track
of the continuous integration of modules, that is, the progress of software development.
This boosts the morale of software developers.
Figure 5.15 shows integration test documentation. This template comprises various
sections, which are listed below:
• Scope of testing: Provides overview of the specific functional, performance, and design
characteristics that are to be tested. In addition, scope describes the completion criteria
for each test phase and keeps record of the constraints that occur in the schedule.
• Test plan: Describes the strategy for integration of software. Testing is divided into
phases and builds. Phases describe distinct tasks that involve various sub-tasks. On the
other hand, builds are group of modules that correspond to each phase. Both phases and
builds address specific functional and behavioural characteristics of the software. Some
of the common test phases that require integration testing include user interaction, data
manipulation and analysis, display outputs, database management, and so on. Every test
phase consists of a functional category within the software. Generally, these phases can
be related to a specific domain within the architecture of software. The criteria commonly
considered for all test phases include interface integrity, functional validity, information
content, and performance.
Note that a test plan should be customised to local requirements; however, it should
contain an integration strategy (in the Test Plan) and testing details (in the Test Procedure).
The test plan should also include the following:
• A schedule for integration, which should include the start and end dates given for
each phase.
• A description of overhead software, concentrating on those items that may require special
effort.
• A description of the testing environment.
A description of the testing environment.
• Test procedure ‘n’: Describes the order of integration and unit tests for modules.
Order of integration provides information about the purpose and the modules to be
tested. Unit tests are conducted for the modules that are built along with the description
of tests for these modules. In addition, test procedure describes the development of
overhead software, expected results during integration testing, and description of test
case data. The test environment and tools or techniques used for testing are also mentioned
in test procedure.
Advantages:
• Gives users an opportunity to ensure that software meets user requirements before actually accepting it from the developer.
• Enables both users and software developers to identify and resolve problems in software.
• Determines the readiness (state of being ready to operate) of software to perform operations.
• Decreases the possibility of software failure to a large extent.
Disadvantages:
• Although users provide valuable feedback, they do not have a detailed knowledge of the software code.
• Since testing is not the users' primary occupation, they may fail to observe or accurately report some software failures.
Since the software is intended for large number of users, it is not possible to perform
acceptance testing with all the users. Therefore, organisations engaged in software
development use alpha and beta testing as a process to detect errors by allowing a limited
number of users to test the software.
(a) Alpha Testing: Alpha testing is conducted by the users at the developer’s site. In other
words, this testing assesses the performance of software in the environment in which it is
developed. On completion of alpha testing, users report the errors to software developers
so that they can correct them. Note that alpha testing is often employed as a form of
internal acceptance testing.
(b) Beta Testing: Beta testing assesses performance of software at user’s site. This testing
is ‘live’ testing and is conducted in an environment, which is not controlled by the developer.
That is, this testing is performed without any interference from the developer. Beta testing
is performed to know whether the developed software satisfies the user requirements and
fits within the business processes or not.
Note that beta testing is often employed as a form of external acceptance testing in order to
acquire feedback from the ‘market’. Often limited public tests known as beta-versions
are released to groups of people so that further testing can ensure that the end product has
few faults or bugs. Sometimes, beta-versions are made available to the open public to
increase the feedback.
The advantages of beta testing are listed below:
• Evaluates the entire documentation of software. For example, it examines the detailed
description of software code, which forms a part of documentation of software.
• Checks whether software is operating successfully in the user environment or not.

Check Your Progress
9. What is unit testing?
10. Explain top-down and bottom-up integration testing.
11. Why is integration test document maintained?
12. Define system testing and its various types.
13. Define validation testing and its various types.

5.6 TESTING TECHNIQUES
Once the software is developed, it should be tested in a proper manner before the system is
delivered to the user. For this, two techniques that provide systematic guidance for designing
tests are used. These techniques are listed below:
• Once the internal working of software is known, tests are performed to ensure that all
internal operations of software are performed according to specifications. This is referred
to as white box testing.
Figure: White box testing exercises the program's segments, branches, paths, conditions, and loops
The various advantages and disadvantages associated with white box testing are listed
in Table 5.8.
Table 5.8 Advantages and Disadvantages of White Box Testing
Advantages:
• Covers the larger part of the program code while testing.
• Uncovers typographical errors.
• Detects design errors that occur when incorrect assumptions are made about execution paths.
Disadvantages:
• Tests that cover most of the program code may not be good for assessing the functionality of surprise (unexpected) behaviours and other testing goals.
• Tests based on design may miss other system problems.
• Test cases need to be changed if the implementation changes.
The effectiveness of white box testing is commonly expressed in terms of test or code
coverage metrics, which measure the fraction of code exercised by test cases. The various
types of testing, which occur as part of white box testing are basis path testing, control
structure testing, and mutation testing.
Basis Path Testing: Basis path testing enables the software tester to generate test cases in order
to develop a logical complexity measure of a component-based design (procedural design). This
measure is used to specify the basis set of execution paths. Here, logical complexity refers
to the set of paths required to execute all statements present in the program. Note that test
cases are generated to make sure that every statement in a program has been executed at
least once.

Figure 5.21 Types of White Box Testing (basis path testing, control structure testing, and mutation testing)
Creating Flow Graph Flow graph is used to show Figure 5.21 Types of White Box Testing
the logical control flow within a program. To
represent the control flow, flow graph uses a notation which is shown in Figure 5.22.
Flow graph uses different symbols, namely, circles and arrows to represent various
statements and flow of control within the program. Circles represent nodes, which are
used to depict the procedural statements present in the program. A series of process boxes
and a decision diamond in a flow chart can be easily mapped into a single node. Arrows
represent edges or links, which are used to depict the flow of control within the program.
It is necessary for every edge to end in a node irrespective of whether it represents a
procedural statement or not. In a flow graph, area bounded by edges and nodes is known
as a region. While counting regions, the area outside the graph is also considered as a
region. Flow graph can be easily understood with the help of a diagram. For example, in
Figure 5.23(a) a flow chart has been depicted, which has been represented as a flow graph
in Figure 5.23(b).
Figure 5.23 (a) Flow Chart and (b) Flow Graph (nodes 1 to 9, with edges and the regions R1 to R4)
Note that a node that contains a condition is known as a predicate node and is characterized by
two or more edges emerging out of it. For example, in Figure 5.23(b), nodes 1, 2, and 3
represent the predicate nodes.
Finding Independent Paths: A path through the program, which specifies a new condition
or a minimum of one new set of processing statements, is known as an independent path.
For example, in nested ‘if’ statements there are several conditions that represent independent
paths. Note that a set of all independent paths present in the program is known as basis set.
A test case is developed to ensure that all the statements present in the program are executed
at least once during testing. For example, all the independent paths in Figure 5.23(b) are
listed below:
P1: 1 – 9
P2: 1 – 2 – 7 – 8 – 1 – 9
P3: 1 – 2 – 3 – 4 – 6 – 8 – 1 – 9
P4: 1 – 2 – 3 – 5 – 6 – 8 – 1 – 9
where ‘P1’, ‘P2’, ‘P3’, and ‘P4’ represent the different independent paths present in the
program.
The number of independent paths present in the program is calculated using cyclomatic
complexity, which is defined as the software metric that provides quantitative measure of
the logical complexity of a program. This software metric also provides information about
the number of tests required to ensure that all statements in the program are executed at
least once.
Cyclomatic complexity can be calculated by using any of the three methods listed below:
1. The total number of regions present in the flow graph of a program represents the
cyclomatic complexity of the program. For example, in Figure 5.23(b), there are four
regions represented by ‘R1’, ‘R2’, ‘R3’, and ‘R4’, hence, the cyclomatic complexity
is four.
2. Cyclomatic complexity can be calculated according to the formula given below:
CC = E – N + 2
where ‘CC’ represents the cyclomatic complexity of the program, ‘E’ represents the
number of edges in the flow graph, and ‘N’ represents the number of nodes in the flow
graph. For example, in Figure 5.23(b), ‘E’ = 11 and ‘N’ = 9. Therefore, CC = 11 – 9
+ 2 = 4.
3. Cyclomatic complexity can also be calculated according to the formula given below:
CC = P + 1
where ‘P’ is the number of predicate nodes in the flow graph. For example, in Figure
5.23(b), P = 3. Therefore, CC = 3 + 1 = 4.
Note: Cyclomatic complexity can be calculated manually for small program suites, but
automated tools are preferred for most operational environments.
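To make these calculations concrete, the following sketch (written in Python purely for illustration) computes the cyclomatic complexity of Figure 5.23(b) using the second and third methods; the edge list is an assumption transcribed from the independent paths listed above.

# Sketch: cyclomatic complexity of the flow graph in Figure 5.23(b),
# computed from an edge list transcribed (as an assumption) from the figure.
edges = [(1, 2), (1, 9), (2, 3), (2, 7), (3, 4), (3, 5),
         (4, 6), (5, 6), (6, 8), (7, 8), (8, 1)]
nodes = {n for edge in edges for n in edge}              # 9 nodes

# Method 2: CC = E - N + 2
cc_by_edges = len(edges) - len(nodes) + 2                # 11 - 9 + 2 = 4

# Method 3: CC = P + 1, where a predicate node has two or more outgoing edges
out_degree = {}
for src, _ in edges:
    out_degree[src] = out_degree.get(src, 0) + 1
predicate_nodes = [n for n, d in out_degree.items() if d >= 2]
cc_by_predicates = len(predicate_nodes) + 1              # 3 + 1 = 4

print(cc_by_edges, cc_by_predicates)                     # prints: 4 4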
Deriving Test Cases: In this, basis path testing is presented as a series of steps and test
cases are developed to ensure that all statements present in the program are executed
during testing. While performing basis path testing, initially the basis set (independent
paths in the program) is derived. The basis set can be derived using the steps given below:
1. Draw the flow graph of the program: A flow graph is constructed using symbols
previously discussed. For example, a program to find the greater of two numbers is
listed below:
procedure greater;
integer: a, b, c = 0;
1 enter the value of a;
2 enter the value of b;
3 if a > b then
4 c = a;
else
5 c = b;
6 end greater

Flow graph for the above program is shown in Figure 5.24.

Figure 5.24 Flow Graph to Find the Greater Between Two Numbers
2. Determine the cyclomatic complexity of the program using the flow graph: The cyclomatic
complexity for the flow graph depicted in Figure 5.24 can be calculated as follows:
CC = 2 regions
Or
CC = 6 edges – 6 nodes + 2 = 2
Or
CC = 1 predicate node + 1 = 2
3. Determine all the independent paths present in the program using flow graph: For the
flow graph shown in Figure 5.24, the independent paths are listed below:
P1 = 1 – 2 – 3 – 4 – 6
P2 = 1 – 2 – 3 – 5 – 6
4. Prepare test cases: Test cases are prepared to implement the execution of all the
independent paths in the basis set. Each test case is executed and compared with the
desired results.
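As an illustration of step 4, the sketch below renders the ‘greater’ procedure in Python (a hypothetical translation, used only for illustration) and prepares one test case for each independent path of the basis set.

# Hypothetical Python rendering of 'procedure greater', used only to show
# how each independent path of the basis set gets its own test case.
def greater(a, b):
    if a > b:        # statement 3 (the predicate node)
        c = a        # statement 4 -> path P1: 1-2-3-4-6
    else:
        c = b        # statement 5 -> path P2: 1-2-3-5-6
    return c         # statement 6

# One test case per independent path; the expected value is the desired result.
assert greater(7, 2) == 7   # exercises P1 (condition a > b is true)
assert greater(2, 7) == 7   # exercises P2 (condition a > b is false)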
Generating Graph Matrix: Graph matrix is used to develop a software tool that in turn
helps in carrying out basis path testing. Graph matrix can be defined as a data structure,
which represents the flow graph of a program in a tabular form. This matrix is also used to
evaluate the control structures present in the program during testing.
Graph matrix consists of rows and columns that represent nodes present in the flow graph.
Note that the size of graph matrix is equal to the number of nodes present in the flow graph.
Every entry in the graph matrix is assigned some value known as link weight. Adding link
weights to each entry makes graph matrix a useful tool for evaluating the control structure
of the program during testing.
Flow graph shown in the Figure 5.25(a) is depicted as a graph matrix in Figure 5.25(b). In
Figure 5.25(a), numbers are used to identify each node in a flow graph, while letters are
used to identify edges in a flow graph. In Figure 5.25(b), a letter entry is made when there
exists a connection between two nodes in the flow graph. For example, node 3 is connected
to the node 6 by edge ‘d’ and node 4 is connected to node 2 by edge ‘c’, and so on.
Figure 5.25 (a) Flow Graph (nodes 1 to 8 connected by edges a to n) and (b) the Corresponding Graph Matrix
(b) Control Structure Testing: Control structure testing is used to enhance the coverage
area by testing various control structures (which include logical structures and loops)
present in the program. Note that basis path testing is used as one of the techniques for
control structure testing. The various types of testing performed under control structure
testing are condition testing, data flow testing, and loop testing.
Condition Testing: Condition testing is a test case design method, which ensures that the
logical conditions and decision statements are free from errors. The errors present in
logical conditions can be incorrect Boolean operators, missing parenthesis in a Boolean
expression, error in relational operators, arithmetic expressions, and so on.
The common types of logical conditions that are tested using condition testing are listed
below:
• A relational expression, such as ‘E1 op E2’, where ‘E1’ and ‘E2’ are arithmetic
expressions and ‘op’ is an operator.
• A simple condition, such as any relational expression preceded by a ‘NOT’ (~) operator.
For example, (~ E1), where ‘E1’ is an arithmetic expression and ‘~’ represents ‘NOT’
operator.
• A compound condition, which is composed of two or more simple conditions, Boolean
operators, and parentheses. For example, (E1 & E2) | (E2 & E3), where ‘E1’, ‘E2’, and
‘E3’ are arithmetic expressions and ‘&’ and ‘|’ represent the ‘AND’ and ‘OR’ operators.
• A Boolean expression consisting of operands and a Boolean operator, such as ‘AND’,
‘OR’, or ‘NOT’. For example, ‘A | B’ is a Boolean expression, where ‘A’ and ‘B’ are
operands and ‘|’ represents the ‘OR’ operator.
Condition testing is performed using different strategies, namely, branch testing, domain
testing, and branch and relational operator testing. Branch testing executes each branch
(like ‘if’ statement) present in the module of a program at least once to detect all the errors
present in the branch. Domain testing tests relational expressions present in a program.
For this, domain testing executes all statements of the program that contain relational
expressions. Branch and relational operator testing tests the branches present in the
module of a program using condition constraints. For example,
if a > 10
then
print big
In this case, branch and relational operator testing verifies that the output produced by the
execution of the above code is ‘big’ only if the value of variable ‘a’ is greater than ‘10’.
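A minimal sketch of branch and relational operator testing for the above example is given below; the function name and the chosen input values are assumptions made for illustration.

# Hypothetical sketch of branch and relational operator testing for the
# branch 'if a > 10 then print big'.
def classify(a):
    if a > 10:
        return "big"
    return "not big"

assert classify(11) == "big"       # true branch taken
assert classify(9) == "not big"    # false branch taken
assert classify(10) == "not big"   # boundary value: fails if '>' were mistyped as '>='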
Data Flow Testing: Data flow testing is a test design technique in which test cases are
designed to execute definition and uses of variables in the program. This testing ensures
that all variables are used properly in a program. To specify test cases, data flow based
testing uses information, such as location at which the variables are defined and used in the
program.
To perform data flow based testing, a definition-use graph is constructed by associating
variables with nodes and edges in the control flow graph. Once these variables are attached
with nodes and edges of control flow graph, test cases can easily determine which variable
is used in which part of a program and how data is flowing in the program. Thus, data flow
of a program can be tested easily using specified test cases.
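The hypothetical sketch below illustrates the idea: the variable ‘discount’ is defined at two points and used at one, giving two definition-use pairs, and the test cases are chosen so that each pair is exercised.

# Hypothetical example for data flow testing: 'discount' has one definition
# per branch (d1, d2) and a single use (u1), giving two definition-use pairs.
def billing(amount, is_member):
    if is_member:
        discount = 10                          # definition d1 (per cent)
    else:
        discount = 0                           # definition d2
    return amount - amount * discount // 100   # use u1

# One test case per definition-use pair:
assert billing(100, True) == 90    # covers the pair d1 -> u1
assert billing(100, False) == 100  # covers the pair d2 -> u1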
Loop Testing: Loop testing is used to check the validity of loops present in the program
modules. Generally, there exist four types of loops, which are listed below:
• Simple loops: Refers to a loop that has no other loops in it. Consider a simple loop of
size ‘n’. Size ‘n’ of the loop indicates that the loop can be traversed ‘n’ times, that is, ‘n’
passes are made through the loop. To test simple loops, a number of steps are followed,
which are listed below:
1. Skip the entire loop.
2. Traverse the loop only once.
3. Traverse the loop two times.
4. Make ‘a’ passes through the loop, where ‘a’ is a number less than the size of loop ‘n’.
5. Traverse the loop n – 1, n, and n + 1 times. (A test sketch illustrating these passes appears after the figure of loop types below.)
• Nested loops: Loops within loops are known as nested loops. While testing nested loops,
number of tests increases as the level of nesting increases. The steps followed for
testing nested loops are listed below:
1. Start with the inner loop and set values of all the outer loops to minimum.
2. Test the inner loop using the steps followed for testing simple loops while holding the
outer loops at their minimum parameter values. Add other tests for values that are
either out-of-range or are eliminated.
3. Move outwards, conducting tests for the next loop. However, keep the nested loops
to ‘typical’ values and outer loops at their minimum values.
4. Continue testing until all loops are tested.
• Concatenated loops: Refers to the loops which contain several loops that may or may
not depend on each other. If the loops are independent from each other, then steps in
simple loops are followed. Otherwise, if the loops are dependent on each other, then
steps in nested loops are followed.
• Unstructured loops: This type of loop should be redesigned so that the use of structured
programming constructs can be reflected.
Figure: Types of loops (simple, nested, concatenated, and unstructured).
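The sketch promised above applies the simple-loop steps to a hypothetical loop of size n = 10; the pass counts in the test data follow the steps listed for simple loops.

# Hypothetical simple loop of size n = 10: sums the first 'passes' elements.
def sum_first(values, passes):
    total = 0
    for i in range(passes):     # the loop under test
        total += values[i]
    return total

n = 10
values = list(range(1, n + 1))
# Pass counts taken from the simple-loop steps: skip the loop, one pass,
# two passes, a typical count a < n, and n - 1 and n passes.
for passes in (0, 1, 2, 5, n - 1, n):
    print(passes, sum_first(values, passes))
# n + 1 passes exceed the available data, so the final step exposes a fault:
try:
    sum_first(values, n + 1)
except IndexError:
    print("n + 1 passes raised an error, as expected")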
(c) Mutation Testing Mutation testing is a white box method where errors are ‘purposely’
inserted into a program (under test) to verify whether the existing test case is able to detect
the error or not. In this testing, mutants of the program are created by making some
changes in the original program. The objective is to check whether each mutant produces
an output that is different from the output produced by the original program.
In mutation testing, test cases that are able to ‘kill’ all the mutants should be developed.
This is accomplished by testing mutants with the developed set of test cases. There can be
two possible outcomes when the test cases test the program, either the test case detects
the faults or fails to detect faults. If faults are detected, then necessary measures are taken
to correct them.
When no faults are detected, it implies that either the program is absolutely correct or the
test case is inefficient to detect the faults. Therefore it can be said that mutation testing is
performed to check the effectiveness of a test case. That is, if a test case is able to detect
these ‘small’ faults (minor changes) in a program, then it is likely that the same test case
will be equally effective in finding real faults.
To perform mutation testing, a number of steps are followed, which are listed below:
1. Create mutants of a program.
2. Check both program and its mutants using test cases.
3. Find the mutants that are different from the main program. A mutant is said to be
different from the main program if it produces an output, which is different from the
output produced by the main program.
4. Find mutants that are equivalent to the program, that is, the mutants that produce same
outputs as produced by the program.
5. Calculate the mutation score using the formula given below:
M = D / (N – E)
where, M = Mutation score
N = Total number of mutants of the program
D = Number of mutants different from the main program
E = Total number of mutants that are equivalent to the main program
6. Repeat steps 1 to 5 till the mutation score is ‘1’.
Figure: Mutation testing process (mutants A1 to A6 are generated from the program and executed against the test cases).
However, mutation testing is very expensive to run on large programs. Thus, certain tools
are used to run mutation tests on large programs. For example, ‘Jester’ is used to run
mutation tests on Java code. This tool targets specific areas of the program code, such as
changing constants and Boolean values.
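The following sketch condenses these steps for a single, hypothetical mutant; the functions and test inputs are assumptions, and the mutation score is computed with the formula from step 5.

# Hypothetical mutation testing sketch: the mutant replaces '+' with '-'.
def price_with_tax(price):            # original program
    return price + price * 18 // 100

def price_with_tax_mutant(price):     # mutant of the program
    return price - price * 18 // 100

tests = [0, 100]
# A mutant is 'killed' if at least one test case produces a differing output.
killed = any(price_with_tax(t) != price_with_tax_mutant(t) for t in tests)

# Mutation score M = D / (N - E): N = 1 mutant, E = 0 equivalent mutants,
# D = 1 killed mutant, so M = 1.0 and the test set is adequate for this mutant.
N, E, D = 1, 0, (1 if killed else 0)
M = D / (N - E)
print(M)    # prints: 1.0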
In black box testing, various inputs are exercised and the outputs are compared against the
specification to validate the correctness. Note that test cases are derived from these
specifications without considering implementation details of the code. The outputs are compared
with user requirements and if they are as specified by the user, then the software is considered
to be correct, else the software is tested for the presence of errors in it.

Figure 5.29 Types of Error Detection in Black Box Testing (requirements, input, output, events, illegal values, and boundary values)
The various advantages and disadvantages associated with black box testing are listed in
Table 5.9.
Table 5.9 Advantages and Disadvantages of Black Box Testing

Advantages:
• Tester requires no knowledge of implementation and programming language used.
• Reveals any ambiguities and inconsistencies in the functional specifications.
• Efficient when used on larger systems.
• Non-technical person can also perform black box testing.

Disadvantages:
• Only a small number of possible inputs can be tested, as testing every possible input consumes a lot of time.
• There can be unnecessary repetition of test inputs if the tester is not informed about the test cases that the software developer has already tried.
• Leaves many program paths untested.
• Cannot be directed towards specific segments of code, hence is more error prone.
(a) Equivalence Class Partitioning: An equivalence class depicts valid or invalid states for the input condition. An input condition
can be either a specific numeric value, a range of values, a Boolean condition, or a set of
values. Generally, guidelines that are followed for generating the equivalence classes are
listed below:
• If an input condition is Boolean, then there will be two equivalence classes: one valid and
one invalid class.
• If input consists of a specific numeric value, then there will be three equivalence classes:
one valid and two invalid classes.
• If input consists of a range, then there will be three equivalence classes: one valid and
two invalid classes.
• If an input condition specifies a member of a set, then there will be one valid and one
invalid equivalence class.
To understand equivalence class partitioning properly, let us consider an example. This
example is explained in series of steps listed below:
1. Suppose that a program ‘P’ takes an integer ‘X’ as input.
2. Now for this input we have ‘X’ < 0 and ‘X’ > 0.
3. If ‘X’ < 0 then program is required to perform task T1 and if X > 0 then task T2 is
performed.
4. The input domain is as large as ‘X’ and it can assume a large number of values.
Therefore the input domain (P) is partitioned into two equivalence classes and all test
inputs in the X < 0 and X > 0 equivalence classes are considered to be equivalent.
5. Now, as shown in Figure 5.32 independent test cases are developed for X < 0 and
X > 0.
Figure 5.32 Equivalence classes for the input ‘X’: one test case (x = –3) for the class X < 0 and another test case (x = 15) for the class X >= 0
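The sketch below restates the example of program ‘P’ in Python; returning the task names ‘T1’ and ‘T2’ as strings is an assumption made only so that one representative test case per equivalence class can be checked.

# Hypothetical program 'P': performs task T1 for X < 0 and task T2 otherwise.
def p(x):
    return "T1" if x < 0 else "T2"

# One representative test input per equivalence class (Figure 5.32); every other
# input of the same class is considered equivalent to it.
assert p(-3) == "T1"    # representative of the class X < 0
assert p(15) == "T2"    # representative of the class X >= 0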
(b) Boundary Value Analysis: Boundary value analysis (BVA) is a black box test design
technique where test cases are designed based on boundary values (that is, test cases are
designed at the edge of the class). Boundary value can be defined as an input value or
output value, which is at the edge of an equivalence partition or at the smallest incremental
distance on either side of an edge, for example the minimum or maximum value of a range.
BVA is used since it has been observed that a large number of errors occur at the boundary
of the given input domain rather than at the middle of the input domain. Note that boundary
value analysis complements the equivalence partitioning method. The only difference is
that in BVA, test cases are derived for both the input domain and output domain while in
equivalence partitioning, test cases are derived only for the input domain.
Generally, the test cases are developed in boundary value analysis using certain guidelines,
which are listed below:
• If input consists of a range of certain values, then test cases should be able to exercise
both the values at the boundaries of the range and the values that are just above and
below boundary values. For example, for the range – 0.5 ≤ X ≤ 0.5, the input values for
a test case can be ‘– 0.4’, ‘– 0.5’, ‘0.5’, ‘0.6’.
• If an input condition specifies a number of values, then test cases are generated to
exercise the minimum and maximum numbers and values just above and below these
limits.
• If input consists of a list of numbers, then the test case should be able to exercise the
first and the last elements of the list.
• If input consists of certain data structures (like arrays), then the test case should be able
to execute all the values present at the boundaries of the data structures, such as the
maximum and minimum value of an array.
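Applying the first guideline to the range –0.5 <= X <= 0.5 of the example, the sketch below derives the boundary-value test inputs; the step of 0.1 used for ‘just above and just below’ is an assumption.

# Sketch: boundary value analysis for the range -0.5 <= X <= 0.5.
def bva_values(low, high, step=0.1):
    # Values at the two boundaries plus values just below and just above them.
    return [round(v, 2) for v in
            (low - step, low, low + step, high - step, high, high + step)]

print(bva_values(-0.5, 0.5))    # prints: [-0.6, -0.5, -0.4, 0.4, 0.5, 0.6]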
(c) Orthogonal Array Testing: Orthogonal array testing can be defined as a mathematical
technique that determines the variations of parameters that need to be tested. This testing is
performed when limited data is to be given as input. Orthogonal array testing is useful in
finding errors in the software where incorrect logic is applied. Orthogonal array testing
provides a way to select tests that:
• Guarantee testing of pair wise combination of all selected variables.
• Create an efficient way to test all combinations of variables using fewer test cases as
compared to other black box testing methods, such as boundary value analysis,
equivalence class partitioning, and cause effect graphing.
• Create test cases that have even distribution of all pair wise combinations of variables in
orthogonal array.
• Execute complex combinations of all the variables.
To understand orthogonal array testing, it is important to understand orthogonal arrays,
which are two-dimensional arrays of numbers. In these arrays, if any two columns are
chosen then the complete distribution of pair-wise combination of values present in the
array can be obtained. To perform orthogonal array testing, follow the steps listed below:
1. Find all the independent variables that need to be tested for interaction. This gives the
factors present in the array.
2. Decide the maximum number of values that each independent variable follows. This
gives the number of levels present in the array.
3. Find an orthogonal array that has minimum number of runs. An orthogonal array with
the minimum number of runs is one that has maximum factors and at least as many
levels as decided for each factor.
4. Map factors and values on to the array.
5. Choose values for the ‘left over’ levels, that is, the levels for which there is no value
mapped in the array.
6. Convert runs into test cases.
In the above steps, runs refer to the number of rows in the array, which directly translates
into the number of test cases that will be generated by the orthogonal array testing
technique. Factors refer to the number of columns in an array, which directly translates to
the maximum number of variables that can be handled by the array. Levels refer to the
maximum number of values that can be taken on by any single factor.
To understand orthogonal array testing properly, let us consider an example of a web page.
This web page consists of three sections, namely, top, middle, and bottom, these sections
can be individually shown or hidden from the users. According to the procedure of orthogonal
array testing, the interactions among different sections can be tested as follows:
• Factors = 3, as there are three sections in the web page.
• Levels = 2, as variables can have either hidden or visible state.
• Draw an orthogonal array of type 2³, as there are two levels and three factors.
The left over levels = 0. Now generate test cases from each run. Four test cases are
generated to check the conditions listed below:
• Home page is displayed and all other sections are hidden.
• Home page and all sections other than the top section are displayed.
• Home page and all sections other than the middle section are displayed.
• Home page and all sections other than the bottom section are displayed.
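For this web page example, the four runs of the standard L4(2³) orthogonal array cover every pair-wise combination of the three sections, as the sketch below checks; mapping the sections to the columns in the order top, middle, bottom is an assumption.

from itertools import combinations

# Sketch: L4(2^3) orthogonal array for the three page sections,
# where 1 means the section is displayed and 0 means it is hidden.
factors = ("top", "middle", "bottom")
l4_runs = [
    (0, 0, 0),   # home page displayed, all three sections hidden
    (0, 1, 1),   # all sections other than the top section displayed
    (1, 0, 1),   # all sections other than the middle section displayed
    (1, 1, 0),   # all sections other than the bottom section displayed
]

# Any two columns, taken together, contain every pair-wise combination of values.
for i, j in combinations(range(3), 2):
    assert {(run[i], run[j]) for run in l4_runs} == {(0, 0), (0, 1), (1, 0), (1, 1)}

# Each run is converted into one test case.
for run in l4_runs:
    print(dict(zip(factors, run)))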
(d) Cause-Effect Graphing: Cause-effect graphing is a test design technique where test
cases are designed using cause-effect graphs. A cause-effect graph is a graphical
representation of inputs and/or stimuli (causes) with their associated outputs (effects),
which can be used to design test cases. Test cases are generated to test all the possible
combinations of inputs provided to the program being tested.
One of the major drawbacks of using equivalence partitioning and boundary value analysis
is that both these methods test every input given to a program independently. This drawback
is avoided in cause effect graphing where combinations of inputs are used instead of
individual inputs. To use cause effect graphing method, a number of steps are followed,
192 Self-Instructional Material which are listed below:
1. List the causes (input conditions) and effects (outputs) of the program.
2. Draw a cause-effect graph that relates the causes to the effects. For the triangle problem, for
example, the causes include C1 (side x is less than the sum of sides y and z) and the effects
include E1 (no triangle is formed).

Figure: Cause-effect graph notation (the identity, NOT, OR, and AND functions) and the logical constraints (exclusive, inclusive, only one, requires, and masks).
3. A decision table (A table that shows a set of conditions and the actions resulting from
them) is drawn as shown in Table 5.12.
Table 5.12 Decision Table

Conditions:
C1: x < y + z    0   X   X   X   X
C2: x = y = z    X   1   X   X   X
C3: x = y        X   X   1   X   X
C4: y = z        X   X   X   1   X
C5: x = z        X   X   X   X   1

Here, each column represents a rule: ‘1’ means the condition is true, ‘0’ means it is false, and ‘X’ means the condition is immaterial (don't care) for that rule.
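As a sketch of how a rule of the decision table becomes a test case, the first column sets C1 to 0 (side x is not less than y + z) and the expected effect is E1 (no triangle is formed); the concrete side lengths below are assumptions.

# Hypothetical test case derived from the first rule of the decision table:
# C1 = 0 (side x is NOT less than y + z), expected effect E1 (no triangle formed).
def forms_triangle(x, y, z):
    return x < y + z and y < x + z and z < x + y

assert forms_triangle(10, 3, 4) is False   # x = 10 >= 3 + 4, so effect E1 holds
assert forms_triangle(3, 4, 5) is True     # a valid triangle, for contrast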
Comparison of White Box Testing and Black Box Testing

Purpose
• White box testing: It is used to test the internal structure of software. It is concerned only with testing software and does not guarantee the complete implementation of all the specifications mentioned in the user requirements.
• Black box testing: It is used to test the functionality of software. It is concerned only with testing the specifications and does not guarantee that all the components of software that are implemented are tested.

Test Cases
• White box testing: Here, test cases are generated based on the actual code of the module to be tested.
• Black box testing: Here, the internal structure of modules or programs is not considered for selecting test cases.

Example
• White box testing: The inner software present inside the calculator (which is known by the developer only) is checked by giving inputs to the code.
• Black box testing: It is checked whether the calculator is working properly or not by giving inputs by pressing the buttons of the calculator.
Test case design in object-oriented testing is based on the conventional methods, however,
these test cases should encompass special features so that they can be used in the object-
oriented environment. The points that should be noted while developing test cases in object-
oriented environment are listed below:
• Each test case should be uniquely identified and explicitly associated with the class to be
tested.
• The purpose of the test should be stated clearly.
• A list of testing steps should be developed for each test and should contain the following:
A list of specified states for the object that is to be tested.
A list of messages and operations, which will be exercised as a consequence of the
test.
A list of exceptions, which may occur as the object is tested.
A list of external conditions (changes in the environment external to the software)
that must exist in order to properly conduct the test.
Supplementary information that aids in understanding or implementing the test.
Figure: Object-oriented testing methods (state-based testing, UML-based testing, fault-based testing, and scenario-based testing).
(a) State-based Testing : State-based testing is used to verify whether the methods (a
procedure that is executed by an object) of a class are interacting properly with each other
or not. This testing seeks to exercise the transitions among the states based upon the
identified inputs. For this, finite-state machine (FSM) or state-transition diagram is constructed
to represent the change of states that occur in the program under test.
For testing the methods, state-based testing generates test cases, which check whether the
method is able to change the state of object as expected or not. If any method of the class
is not able to change the state of object as expected, then the method is said to contain
errors.
To perform state-based testing, a number of steps are followed, which are listed below:
1. Derive a new class from an existing class with some additional features, which are
used to examine and set the state of the object.
2. Next, test driver is written. This test driver contains a main program to create an
object, send messages to set the state of object, send messages to invoke methods
of the class that is being tested and send messages to check the final state of the
object.
3. Finally, stubs are written. These stubs call the untested methods.
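A minimal sketch of these steps follows; the Account class, its states, and the test driver are hypothetical stand-ins for the derived class, test driver, and state checks described above.

# Hypothetical state-based test: each method should move the object
# into the expected state.
class Account:
    def __init__(self):
        self.state = "open"
    def freeze(self):
        if self.state == "open":
            self.state = "frozen"
    def close(self):
        self.state = "closed"

def test_driver():
    acct = Account()                 # create the object
    assert acct.state == "open"      # examine the initial state
    acct.freeze()                    # message that should change the state
    assert acct.state == "frozen"
    acct.close()
    assert acct.state == "closed"    # check the final state

test_driver()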
(b) Fault-based Testing: In fault-based testing, test cases are developed to determine a set
of plausible faults. Here, the focus is on falsification. In this testing, tester does not focus
on a particular coverage of a program or its specification, but on concrete faults that
should be detected. The focus on possible faults enables testers to incorporate their expertise
in both the application domain and the particular system under test. Since testing can only
prove the existence of errors and not their absence, this testing approach is considered to
be an effective testing method and is hence often used when security or safety of a system
is to be tested.
Fault-based testing starts by examining the analysis and design model of object-oriented
software. These models provide an overview of the problems that can occur during
implementation of software. The faults occur in both operation calls and various types of
messages (like a message sent to invoke an object). These faults are unexpected outputs,
incorrect messages or operations, and incorrect invocation. The faults can be recognised
by determining the behaviour of all operations performed to invoke the methods of a class.
(c) Scenario-based Testing: Scenario-based testing is used to detect errors that are caused
due to incorrect specifications and improper interactions among various segments of the
software. Incorrect interactions often lead to incorrect outputs that can cause malfunctioning
of some segments of software. The use of scenarios in testing is a common way of
describing how a representative user might execute a task or achieve a goal within a specific
context or environment. Note that these scenarios are more context and user specific
instead of being product specific. Generally, the structure of a scenario includes the following:
• A condition under which the scenario runs.
• A goal to achieve, which can also be a name of the scenario.
• A set of steps of actions.
• An end condition at which the goal is achieved.
• A possible set of extensions written as scenario fragments.
Scenario-based testing combines all the classes that support a use case (scenarios are
subset of use cases) and executes a test case to test them. Execution of all the test cases
ensures that all methods in all the classes are executed at least once during testing. However,
it is difficult to test all the objects (present in the classes combined together) collectively.
Thus, rather than testing all objects collectively, they are tested using either top-down or
bottom-up integration approach.
This testing is considered to be the most effective method as in this method, scenarios can
be organised in such a manner that the most likely scenarios are tested first with unusual or
exceptional scenarios considered later in the testing process. This satisfies a fundamental
principle of testing that most testing effort should be devoted to those paths of the system
that are used most often.
Note: A use case collects all the scenarios together, specifying the manner in which the
goal can succeed or fail.
INTERNAL ASSIGNMENT
TOTAL MARKS: 25
ASSIGNMENT SHEET
(To be attached with each Assignment)
________________________________________________________________________
Registration Number:
Total Marks:_____________/25
Remarks by Evaluator:__________________________________________________________________
__________________________________________________________________________________
Note: Please ensure that your Correct Registration Number is mentioned on the Assignment Sheet.
Date:_______________ Date:_______________