
Software Engineering - Full - Notes


CS1342: SOFTWARE ENGINEERING

SYLLABUS
Module I: Introduction: Evolution; Software life cycle models: A few basic concepts, Waterfall model
and its extension, Agile development models, Spiral model, Comparison of different life cycle models
Module II: Software Project Management, Project Planning, Metrics for project size estimations,
Project Estimation Techniques, Basic COCOMO model, Risk Management, Software Requirements
Analysis and Specification: Requirements gathering and analysis, Software Requirements
Specification
Module III: Software Design: overview of the design process, How to characterise a good software
design, Cohesion and Coupling, Approaches to software design, Function oriented design: Overview
of SA/SD Methodology, Structured analysis, Developing the DFD model of a system, Structured
Design, User Interface design: Characteristics of a good user interface, Basic concepts, Types of user
interfaces
Module IV: Coding and Testing: Coding, Code review, Software documentation, Testing, Unit testing,
Black box testing, White box testing: Basic concepts, Debugging, Integration testing, System testing,
Software Reliability and quality management: Software reliability, Software quality, Software
maintenance: Characteristics of software maintenance, Software reverse engineering, Emerging
Trends: Client Server Software, Client Server architectures, CORBA, Service Oriented Architectures
(SOA), Software as a Service.

Software Engineering MODULE I


Software is more than just program code. A program is an executable code, which
serves some computational purpose. Software is considered to be a collection of executable
programming code, associated libraries and documentation. Software, when made for a
specific requirement, is called a software product.
Engineering, on the other hand, is all about developing products using well-defined
scientific principles and methods.
Software engineering is an engineering branch associated with the development of software
products using well-defined scientific principles, methods and procedures. The outcome of
software engineering is an efficient and reliable software product.

Need of Software Engineering

The need for software engineering arises because of the higher rate of change in user
requirements and in the environment in which the software works.

Large software - It is easier to build a wall than a house or a building; likewise, as the
size of software becomes large, engineering has to step in to give it a scientific process.
Scalability - If the software process were not based on scientific and engineering
concepts, it would be easier to re-create new software than to scale an existing one.
Cost - The hardware industry has shown its skill, and huge manufacturing has lowered
the price of computer and electronic hardware. But the cost of software remains
high if a proper process is not adopted.
Dynamic Nature - The always growing and adapting nature of software hugely
depends upon the environment in which the user works. If the nature of software is always
changing, new enhancements need to be made in the existing software. This is where
software engineering plays a good role.
Quality Management - A better process of software development provides a better and
quality software product.

Characteristics of good software


A software product can be judged by what it offers and how well it can be used. The software
must satisfy the following grounds:

Operational
Transitional
Maintenance
Well-engineered and crafted software is expected to have the following
characteristics:

Operational

This tells us how well software works in operations. It can be measured on:
Budget
Usability
Efficiency
Correctness
Functionality
Dependability
Security
Safety

Transitional

This aspect is important when the software is moved from one platform to another:

Portability
Interoperability
Reusability
Adaptability

Maintenance

This aspect describes how well the software can maintain itself in the ever-
changing environment:

Modularity
Maintainability
Flexibility
Scalability
In short, software engineering is a branch of computer science which uses well-defined
engineering concepts to produce efficient, durable, scalable, in-budget and on-time
software products.

What is SDLC?

SOFTWARE DEVELOPMENT LIFE CYCLE (SDLC) is a systematic process for building
software that ensures the quality and correctness of the software built. The SDLC process aims
to produce high-quality software that meets customer expectations. The system development
should be completed within the pre-defined time frame and cost. SDLC consists of a detailed plan
which explains how to plan, build, and maintain specific software. Every phase of the SDLC
life cycle has its own process and deliverables that feed into the next phase. SDLC stands for
Software Development Life Cycle.

Why SDLC?

Here are the prime reasons why SDLC is important for developing a software system:
It offers a basis for project planning, scheduling, and estimating
Provides a framework for a standard set of activities and deliverables
It is a mechanism for project tracking and control
Increases visibility of project planning to all involved stakeholders of the development
process
Increases and enhances development speed
Improves client relations
Helps you to decrease project risk and project management plan overhead

SDLC Phases

The entire SDLC process is divided into the following stages:

Phase 1: Requirement collection and analysis


Phase 2: Feasibility study
Phase 3: Design
Phase 4: Coding
Phase 5: Testing
Phase 6: Installation/Deployment
Phase 7: Maintenance

Classical Waterfall Model

The classical waterfall model is the basic software development life cycle model. It is very
simple but idealistic. Earlier this model was very popular, but nowadays it is not used. However, it
is very important because all the other software development life cycle models are based on
the classical waterfall model.
The classical waterfall model divides the life cycle into a set of phases. This model considers that
one phase can be started only after the completion of the previous phase; that is, the output of one
phase will be the input to the next phase. Thus the development process can be considered
as a sequential flow, as in a waterfall. Here the phases do not overlap with each other. The
different sequential phases of the classical waterfall model are shown in the figure below:

Let us now learn about each of these phases in brief detail:


1. Feasibility Study: The main goal of this phase is to determine whether it would be
financially and technically feasible to develop the software. The feasibility
study involves understanding the problem and then determining the various possible
strategies to solve it. These different identified solutions are analyzed based
on their benefits and drawbacks. The best solution is chosen and all the other phases
are carried out as per this solution strategy.
2. Requirements analysis and specification: The aim of the requirement analysis and
specification phase is to understand the exact requirements of the customer and
document them properly. This phase consists of two different activities.

• Requirement gathering and analysis: Firstly, all the requirements regarding the
software are gathered from the customer and then the gathered requirements are
analyzed. The goal of the analysis part is to remove incompleteness (an incomplete
requirement is one in which some parts of the actual requirements have been
omitted) and inconsistencies (an inconsistent requirement is one in which some part
of the requirement contradicts some other part).
• Requirement specification: The analyzed requirements are documented in a
Software Requirements Specification (SRS) document. The SRS document serves as
a contract between the development team and the customer. Any future dispute between
the customers and the developers can be settled by examining the SRS document.
The outcome of this phase is the SRS.
3. Design: The aim of the design phase is to transform the requirements specified in the
SRS document into a structure that is suitable for implementation in some programming
language.
4. Coding and Unit testing: In the coding phase, the software design is translated into source code
using any suitable programming language. Thus each designed module is coded. The
aim of the unit testing phase is to check whether each module is working properly or not.
5. Integration and System testing: Integration of different modules is undertaken soon
after they have been coded and unit tested. Integration of various modules is carried out
incrementally over a number of steps. During each integration step, previously planned
modules are added to the partially integrated system and the resultant system is tested.
Finally, after all the modules have been successfully integrated and tested, the full
working system is obtained and system testing is carried out on it.
System testing consists of three different kinds of testing activities, as described below:
• Alpha testing: Alpha testing is the system testing performed by the development
team.
• Beta testing: Beta testing is the system testing performed by a friendly set of
customers.
• Acceptance testing: After the software has been delivered, the customer
performs acceptance testing to determine whether to accept the delivered
software or to reject it.
6. Maintenance: Maintenance is the most important phase of a software life cycle. The
effort spent on maintenance is about 60% of the total effort spent to develop the full software.
There are basically three types of maintenance:
• Corrective Maintenance: This type of maintenance is carried out to correct errors
that were not discovered during the product development phase.
• Perfective Maintenance: This type of maintenance is carried out to enhance the
functionalities of the system based on the customer’s request.
• Adaptive Maintenance: Adaptive maintenance is usually required for porting the
software to work in a new environment, such as on a new computer platform
or with a new operating system.
Advantages of Classical Waterfall Model
Classical waterfall model is an idealistic model for software development. It is very simple, so
it can be considered as the basis for other software development life cycle models. Below are
some of the major advantages of this SDLC model:
• This model is very simple and is easy to understand.
• Phases in this model are processed one at a time.
• Each stage in the model is clearly defined.
• This model has very clear and well understood milestones.
• Process, actions and results are very well documented.
• Reinforces good habits: define-before-design, design-before-code.
• This model works well for smaller projects and projects where requirements are well
understood.
Drawbacks of Classical Waterfall Model
The classical waterfall model suffers from various shortcomings; basically we can’t use it in real
projects. Instead, we use other software development life cycle models which are based on the
classical waterfall model. Below are some major drawbacks of this model:
• No feedback path: In the classical waterfall model, the evolution of software from one phase
to another is like a waterfall. It assumes that no error is ever committed by
developers during any phase. Therefore, it does not incorporate any mechanism for
error correction.
• Difficult to accommodate change requests: This model assumes that all the customer
requirements can be completely and correctly defined at the beginning of the project,
but actually customers’ requirements keep on changing with time. It is difficult to
accommodate any change requests after the requirements specification phase is
complete.
• No overlapping of phases: This model recommends that a new phase can start only
after the completion of the previous phase. But in real projects, this can’t be maintained.
To increase efficiency and reduce cost, phases may overlap.

Iterative Waterfall Model

• In a practical software development project, the classical waterfall model is hard to use.
So, the iterative waterfall model can be thought of as incorporating the necessary changes
to the classical waterfall model to make it usable in practical software development
projects. It is almost the same as the classical waterfall model, except that some changes are
made to increase the efficiency of the software development.
• The iterative waterfall model provides feedback paths from every phase to its
preceding phases, which is the main difference from the classical waterfall
model.
Feedback paths introduced by the iterative waterfall model are shown in the figure below.
When errors are detected at some later phase, these feedback paths allow correcting errors
committed by programmers during some earlier phase. The feedback paths allow the phase in
which errors were committed to be reworked, and these changes are reflected in the later phases.
But there is no feedback path to the feasibility study stage, because once a project has been
taken up, it is not given up easily.
It is good to detect errors in the same phase in which they are committed. It reduces the effort
and time required to correct the errors.
Phase Containment of Errors: The principle of detecting errors as close to their points of
commitment as possible is known as phase containment of errors.
Advantages of Iterative Waterfall Model
• Feedback Path: In the classical waterfall model, there are no feedback paths, so there
is no mechanism for error correction. But in the iterative waterfall model, the feedback path from
one phase to its preceding phase allows correcting the errors that are committed, and
these changes are reflected in the later phases.
• Simple: Iterative waterfall model is very simple to understand and use. That’s why it is
one of the most widely used software development models.
Drawbacks of Iterative Waterfall Model
• Difficult to incorporate change requests: The major drawback of the iterative waterfall
model is that all the requirements must be clearly stated before the development
phase starts. The customer may change the requirements after some time, but the
iterative waterfall model does not leave any scope to incorporate change requests that
are made after the development phase starts.
• Incremental delivery not supported: In the iterative waterfall model, the full software
is completely developed and tested before delivery to the customer. There is no scope
for any intermediate delivery. So, customers have to wait a long time to get the software.
• Overlapping of phases not supported: The iterative waterfall model assumes that one
phase can start only after the completion of the previous phase. But in real projects, phases may
overlap to reduce the effort and time needed to complete the project.
• Risk handling not supported: Projects may suffer from various types of risks. But the
iterative waterfall model has no mechanism for risk handling.
• Limited customer interactions: Customer interaction occurs only at the start of the project
at the time of requirements gathering and at project completion at the time of software
delivery. These few interactions with the customers may lead to many problems, as
the finally developed software may differ from the customers’ actual requirements.
Prototyping Model

The prototyping model is also a popular software development life cycle model. The
prototyping model can be considered to be an extension of the iterative waterfall model. This
model suggests building a working prototype of the system before the development of the
actual software.
A prototype is a toy and crude implementation of a system. It has limited functional
capabilities, low reliability, and inefficient performance compared to the actual
software. A prototype can be built very quickly by using several shortcuts, such as
developing inefficient, inaccurate or dummy functions.
Necessity of the Prototyping Model –
• It is advantageous to develop the Graphical User Interface (GUI) part of a software
using the prototyping model. Through the prototype, the user can experiment with a
working user interface and suggest any changes if needed.
• The prototyping model is especially useful when the exact technical solutions are
unclear to the development team. A prototype can help them to critically examine the
technical issues associated with the product development. The lack of familiarity with
a required development technology is a technical risk. This can be resolved by
developing a prototype to understand the issues and accommodate the changes in
the next iteration.
Phases of Prototyping Model –
The prototyping model of software development is graphically shown in the figure below.
The software is developed through two major activities – one is prototype construction and
the other is iterative waterfall based software development.
Prototype Development – Prototype development starts with an initial requirements gathering
phase. A quick design is carried out and a prototype is built. The developed prototype is
submitted to the customer for evaluation. Based on the customer feedback, the requirements
are refined and the prototype is suitably modified. This cycle of obtaining customer feedback
and modifying the prototype continues till the customer approves the prototype.
Iterative Development – Once the customer approves the prototype, the actual software is
developed using the iterative waterfall approach. In spite of the availability of a working
prototype, the SRS document usually still needs to be developed, since the SRS document is
invaluable for carrying out traceability analysis, verification and test case design during later
phases.
The code for the prototype is usually thrown away. However, the experience gathered from
developing the prototype helps a great deal in developing the actual software. By constructing
the prototype and submitting it for user evaluation, many customer requirements get properly
defined and technical issues get resolved by experimenting with the prototype. This minimises
later change requests from the customer and the associated redesign costs.
Advantages of Prototyping Model – This model is most appropriate for the projects that
suffer from technical and requirements risks. A constructed prototype helps to overcome these
risks.

Disadvantages of Prototyping Model –


• The cost of developing the software using the prototyping model can increase in
cases where the risks are very low.
• It may take more time to develop software using the prototyping model.
• The prototyping model is effective only for those projects for which the risks can be
identified before development starts. Since the prototype is developed at the start of
the project, the prototyping model is ineffective for risks that are identified after the
development phase starts.
Incremental process model

The incremental process model is also known as the successive version model.


First, a simple working system implementing only a few basic features is built and
delivered to the customer. Thereafter, many successive iterations/versions are
implemented and delivered to the customer until the desired system is released.

A, B, C are modules of the software product that are incrementally developed and delivered.


Life cycle activities –
The requirements of the software are first broken down into several modules that can be
incrementally constructed and delivered. At any time, the plan is made just for the next
increment and not for any kind of long-term plan. Therefore, it is easier to modify the version
as per the needs of the customer. The development team first undertakes to develop the core features
(those that do not need services from other features) of the system.
Once the core features are fully developed, these are refined to increase levels of
capability by adding new functions in successive versions. Each incremental version is
usually developed using an iterative waterfall model of development.
As each successive version of the software is constructed and delivered, the feedback
of the customer is taken and incorporated in the next version. Each
version of the software has more additional features than the previous ones.
After requirements gathering and specification, the requirements are split into
several different versions. Starting with version 1, in each successive increment the
next version is constructed and then deployed at the customer site. After the last
version (version n), it is deployed at the client site.
Types of Incremental model –
1. Staged Delivery Model – Construction of only one part of the project at a time.

2. Parallel Development Model – Different subsystems are developed at the
same time. It can decrease the calendar time needed for the development,
i.e. TTM (Time to Market), if enough resources are available.

When to use this –


1. When there is a funding schedule, risk, program complexity, or a need for early
realization of benefits.
2. When requirements are known up-front.
3. When projects have lengthy development schedules.
4. For projects with new technology.
Advantages –
• Error reduction (core modules are used by the customer from the beginning
of the phase and are then tested thoroughly).
• Uses divide and conquer for breakdown of tasks.
• Lowers initial delivery cost.
• Incremental Resource Deployment.
Disadvantages –
• Requires good planning and design.
• Total cost is not lower.
• Well defined module interfaces are required.

Spiral Model

The spiral model is one of the most important software development life cycle
models, which provides support for risk handling. In its diagrammatic
representation, it looks like a spiral with many loops. The exact number of loops of
the spiral is unknown and can vary from project to project. Each loop of the spiral
is called a phase of the software development process. The exact number of
phases needed to develop the product can be varied by the project manager
depending upon the project risks. As the project manager dynamically determines
the number of phases, the project manager has an important role in developing a
product using the spiral model.
The radius of the spiral at any point represents the expenses (cost) of the project
so far, and the angular dimension represents the progress made so far in the current
phase.
The diagram below shows the different phases of the Spiral Model:

Each phase of Spiral Model is divided into four quadrants as shown in the above
figure. The functions of these four quadrants are discussed below-

1. Objective determination and identification of alternative solutions:


Requirements are gathered from the customers and the objectives are
identified, elaborated and analyzed at the start of every phase. Then
alternative solutions possible for the phase are proposed in this quadrant.
2. Identify and resolve risks: During the second quadrant, all the possible
solutions are evaluated to select the best possible solution. Then the risks
associated with that solution are identified and are resolved using
the best possible strategy. At the end of this quadrant, a prototype is built for
the best possible solution.
3. Develop next version of the Product: During the third quadrant, the
identified features are developed and verified through testing. At the end of
the third quadrant, the next version of the software is available.
4. Review and plan for the next phase: In the fourth quadrant, the customers
evaluate the version of the software developed so far. At the end, planning for
the next phase is started.
Risk Handling in Spiral Model
A risk is any adverse situation that might affect the successful completion of a
software project. The most important feature of the spiral model is handling these
unknown risks after the project has started. Such risk resolutions are easier done
by developing a prototype. The spiral model supports coping with risks by
providing the scope to build a prototype at every phase of the software development.
The prototyping model also supports risk handling, but the risks must be identified
completely before the start of the development work of the project. But in real life,
project risks may occur after the development work starts; in that case, we cannot
use the prototyping model. In each phase of the spiral model, the features of the product
are elaborated and analyzed, and the risks at that point of time are identified and are resolved
through prototyping. Thus, this model is much more flexible compared to other
SDLC models.
Why is the Spiral Model called a Meta Model?
The spiral model is called a meta model because it subsumes all the other SDLC
models. For example, a single loop spiral actually represents the iterative waterfall
model. The spiral model incorporates the stepwise approach of the classical
waterfall model. The spiral model uses the approach of the prototyping model by
building a prototype at the start of each phase as a risk handling technique. Also,
the spiral model can be considered as supporting the evolutionary model – the
iterations along the spiral can be considered as evolutionary levels through which
the complete system is built.
Advantages of Spiral Model: Below are some of the advantages of the Spiral
Model.
• Risk Handling: For projects with many unknown risks that occur as the
development proceeds, the Spiral Model is the best development
model to follow, due to the risk analysis and risk handling at every phase.
• Good for large projects: It is recommended to use the Spiral Model in large
and complex projects.
• Flexibility in Requirements: Change requests in the requirements at a later
phase can be incorporated accurately by using this model.
• Customer Satisfaction: Customers can see the development of the product
at the early phases of the software development and thus become habituated
with the system by using it before completion of the total product.
Disadvantages of Spiral Model: Below are some of the main disadvantages of the
spiral model.
• Complex: The Spiral Model is much more complex than other SDLC models.
• Expensive: The Spiral Model is not suitable for small projects as it is expensive.
• Too dependent on risk analysis: The successful completion of the
project is very much dependent on risk analysis. Without very highly
experienced expertise, it is going to be a failure to develop a project using this
model.
• Difficulty in time management: As the number of phases is unknown at
the start of the project, time estimation is very difficult.

Agile Model

The Agile SDLC model is a combination of iterative and incremental process models
with a focus on process adaptability and customer satisfaction through rapid delivery of
working software. Agile methods break the product into small incremental
builds. These builds are provided in iterations. Each iteration typically lasts from
about one to three weeks. Every iteration involves cross-functional teams working
simultaneously on various areas like −

• Planning
• Requirements Analysis
• Design
• Coding
• Unit Testing and
• Acceptance Testing.
At the end of the iteration, a working product is displayed to the customer and
important stakeholders.
Following are the Agile Manifesto principles −
• Individuals and interactions − In Agile development, self-organization and
motivation are important, as are interactions like co-location and pair
programming.
• Working software − Demo working software is considered the best means
of communication with the customers to understand their requirements,
instead of just depending on documentation.
• Customer collaboration − As the requirements cannot be gathered
completely in the beginning of the project due to various factors, continuous
customer interaction is very important to get proper product requirements.
• Responding to change − Agile development is focused on quick
responses to change and continuous development.
Agile Vs Traditional SDLC Models
Agile is based on adaptive software development methods, whereas traditional
SDLC models like the waterfall model are based on a predictive approach.
Predictive teams in the traditional SDLC models usually work with detailed
planning and have a complete forecast of the exact tasks and features to be
delivered in the next few months or during the product life cycle.
Predictive methods entirely depend on the requirement analysis and
planning done in the beginning of cycle. Any changes to be incorporated go
through a strict change control management and prioritization.
Agile uses an adaptive approach where there is no detailed planning and there is
clarity on future tasks only in respect of what features need to be developed. There
is feature driven development and the team adapts to the changing product
requirements dynamically. The product is tested very frequently, through the
release iterations, minimizing the risk of any major failures in future.
Customer interaction is the backbone of the Agile methodology, and open
communication with minimum documentation is a typical feature of the Agile
development environment. The agile teams work in close collaboration with each
other and are most often located in the same geographical location.
Agile Model - Pros and Cons
Agile methods are being widely accepted in the software world recently. However,
this method may not always be suitable for all products. Here are some pros and
cons of the Agile model.
The advantages of the Agile Model are as follows −
• Is a very realistic approach to software development.
• Promotes teamwork and cross training.
• Functionality can be developed rapidly and demonstrated.
• Resource requirements are minimum.
• Suitable for fixed or changing requirements
• Delivers early partial working solutions.
• Good model for environments that change steadily.
• Minimal rules; documentation is easily employed.
• Enables concurrent development and delivery within an overall planned
context.
• Little or no planning required.
• Easy to manage.
• Gives flexibility to developers.
The disadvantages of the Agile Model are as follows −
• Not suitable for handling complex dependencies.
• More risk in terms of sustainability, maintainability and extensibility.
• An overall plan, an agile leader and agile PM practice are a must, without
which it will not work.
• Strict delivery management dictates the scope, the functionality to be
delivered, and the adjustments to meet the deadlines.
• Depends heavily on customer interaction, so if the customer is not clear, the
team can be driven in the wrong direction.
• There is a very high individual dependency, since there is minimum
documentation generated.
• Transfer of technology to new team members may be quite challenging
due to lack of documentation.
MODULE II

Software Project Planning

Project Planning and Project Estimation Techniques

Specific Instructional Objectives


At the end of this lesson the student would be able to:

• Identify the job responsibilities of a software project manager.


• Identify the necessary skills required in order to perform software project
management.
• Identify the essential activities of project planning.
• Determine the different project related estimates performed by a project
manager and suitably order those estimates.
• Explain what is meant by Sliding Window Planning.
• Explain what is Software Project Management Plan (SPMP).
• Identify and explain two metrics for software project size estimation.
• Identify the shortcomings of function point (FP) metric.
• Explain the necessity of feature point metric in the context of project size
estimation.
• Identify the types of project-parameter estimation techniques.

Responsibilities of a software project manager


Software project managers take the overall responsibility of steering a project to success.
It is very difficult to objectively describe the job responsibilities of a project manager. The
job responsibility of a project manager ranges from invisible activities like building up team
morale to highly visible customer presentations. Most managers take responsibility for
project proposal writing, project cost estimation, scheduling, project staffing, software
process tailoring, project monitoring and control, software configuration management, risk
management, interfacing with clients, managerial report writing and presentations, etc.
These activities are certainly numerous, varied and difficult to enumerate, but they
can be broadly classified into project planning, and project monitoring and
control activities. The project planning activity is undertaken before the development starts,
to plan the activities to be undertaken during development. The project monitoring and
control activities are undertaken once the development activities start, with the aim of
ensuring that the development proceeds as per plan and changing the plan whenever
required to cope with the situation.

Skills necessary for software project management


A theoretical knowledge of different project management techniques is certainly necessary
to become a successful project manager. However, effective software project
management frequently calls for good qualitative judgment and decision taking
capabilities. In addition to having a good grasp of the latest software project management
techniques such as cost estimation, risk management, configuration management, project
managers need good communication skills and the ability get work done. However, some
skills such as tracking and controlling the progress of the project, customer interaction,
managerial presentations, and team building are largely acquired through experience.
None the less, the importance of sound knowledge of the prevalent projectmanagement
techniques cannot be overemphasized.

Project planning
Once a project is found to be feasible, software project managers undertake project
planning. Project planning is undertaken and completed even before any development
activity starts. Project planning consists of the following essential activities:

• Estimating the following attributes of the project:


Project size: What will be the problem complexity in terms of the effort and
time required to develop the product?
Cost: How much is it going to cost to develop the project?
Duration: How long is it going to take to complete the development?
Effort: How much effort would be required?

The effectiveness of the subsequent planning activities is based on the
accuracy of these estimations.

• Scheduling manpower and other resources

• Staff organization and staffing plans

• Risk identification, analysis, and abatement planning

• Miscellaneous plans such as quality assurance plan, configuration
management plan, etc.

Precedence ordering among project planning activities


Different project related estimates done by a project manager have already been
discussed. Fig. 11.1 shows the order in which important project planning activities may
be undertaken. From fig. 11.1 it can be easily observed that size estimation is the first
activity. It is also the most fundamental parameter based on which all other planning
activities are carried out. Other estimations such as estimation of effort, cost, resource,
and project duration are also very important components of project planning.
Fig. 11.1: Precedence ordering among planning activities
Sliding Window Planning

Project planning requires utmost care and attention since commitment to unrealistic time and
resource estimates results in schedule slippage. Schedule delays can cause customer
dissatisfaction and adversely affect team morale. They can even cause project failure. However,
project planning is a very challenging activity. Especially for large projects, it is very
difficult to make accurate plans. A part of this difficulty is due to the fact that the project
parameters, scope of the project, project staff, etc. may change during the span of the project.
In order to overcome this problem, project managers sometimes undertake project planning in
stages. Planning a project over a number of stages protects managers from making big
commitments too early. This technique of staggered planning is known as sliding window
planning. In the sliding window technique, starting with an initial plan, the project is planned
more accurately in successive development stages. At the start of a project, project managers
have incomplete knowledge about the details of the project. Their information base gradually
improves as the project progresses through different phases. After the completion of every
phase, the project managers can plan each subsequent phase more accurately and with
increasing levels of confidence.

Software Project Management Plan (SPMP)


Once project planning is complete, project managers document their plans in a Software Project
Management Plan (SPMP) document. The SPMP document should discuss a list of different
items, as given below. This list can be used as a possible organization of the
SPMP document.

Organization of the Software Project Management Plan (SPMP) Document

1. Introduction
(a) Objectives
(b) Major Functions
(c) Performance Issues
(d) Management and Technical Constraints

2. Project Estimates
(a) Historical Data Used
(b) Estimation Techniques Used
(c) Effort, Resource, Cost, and Project Duration Estimates

3. Schedule
(a) Work Breakdown Structure
(b) Task Network Representation
(c) Gantt Chart Representation
(d) PERT Chart Representation

4. Project Resources

(a) People
(b) Hardware and Software
(c) Special Resources

5. Staff Organization
(a) Team Structure
(b) Management Reporting

6. Risk Management Plan

(a) Risk Analysis


(b) Risk Identification
(c) Risk Estimation
(d) Risk Abatement Procedures

7. Project Tracking and Control Plan

8. Miscellaneous Plans

(a) Process Tailoring


(b) Quality Assurance Plan
(c) Configuration Management Plan
(d) Validation and Verification
(e) System Testing Plan
(f) Delivery, Installation, and Maintenance Plan

Metrics for software project size estimation


Accurate estimation of the problem size is fundamental to satisfactory estimation of effort, time
duration and cost of a software project. In order to be able to accurately estimate the project
size, some important metrics should be defined in terms of which the project size can be
expressed. The size of a problem is obviously not the number of bytes that the source code
occupies, nor is it the byte size of the executable code. The project size is a measure of the
problem complexity in terms of the effort and time required to develop the product.

Currently, two metrics are being widely used to estimate size: lines of code
(LOC) and function point (FP). The usage of each of these metrics in project size estimation
has its own advantages and disadvantages.

Lines of Code (LOC)

LOC is the simplest among all metrics available to estimate project size. This metric is very
popular because it is the simplest to use. Using this metric, the project size is estimated by
counting the number of source instructions in the developed program. Obviously, while counting
the number of source instructions, lines used for commenting the code and the header lines
should be ignored.

Determining the LOC count at the end of a project is a very simple job. However,
accurate estimation of the LOC count at the beginning of a project is very difficult. In order to
estimate the LOC count at the beginning of a project, project managers usually divide the
problem into modules, and each module into submodules and so on, until the sizes of the
different leaf-level modules can be approximately predicted. To be able to do this, past
experience in developing similar products is helpful. By adding up the estimates of the lowest-level
modules, project managers arrive at the total size estimation.
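
To make the bottom-up procedure concrete, the short Python sketch below sums hypothetical leaf-level LOC estimates to obtain a total size figure; the module names and numbers are purely illustrative and not taken from any real project.

# Minimal sketch of bottom-up LOC estimation by module decomposition.
# All module names and leaf-level figures below are illustrative assumptions.
leaf_estimates = {
    "user_interface/login_form": 300,
    "user_interface/report_screens": 900,
    "business_logic/validation": 450,
    "business_logic/billing_rules": 1200,
    "data_access/queries": 600,
}

total_loc = sum(leaf_estimates.values())
print(f"Estimated size: {total_loc} LOC (~{total_loc / 1000:.1f} KLOC)")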

Function point (FP)

The function point metric was proposed by Albrecht [1983]. This metric overcomes many of the
shortcomings of the LOC metric. Since its inception in the late 1970s, the function point metric has
been slowly gaining popularity. One of the important advantages of using the function point
metric is that it can be used to easily estimate the size of a software product directly from the
problem specification. This is in contrast to the LOC metric, where the size can be accurately
determined only after the product has been fully developed.
The conceptual idea behind the function point metric is that the size of a software product is
directly dependent on the number of different functions or features it supports. A software
product supporting many features would certainly be of larger size than a product with fewer
features. Each function, when invoked, reads some input data and transforms it to the
corresponding output data. For example, the issue-book feature (as shown in fig. 11.2) of a
Library Automation Software takes the name of the book as input and displays its location and
the number of copies available. Thus, a count of the number of input and output data
values of a system gives some indication of the number of functions supported by the system.
Albrecht postulated that in addition to the number of basic functions that a software performs,
the size is also dependent on the number of files and the number of interfaces.

Fig. 11.2: System function as a map of input data to output data

Besides using the number of input and output data values, the function point metric computes the
size of a software product (in units of function points or FPs) using three other characteristics
of the product, as shown in the following expression. The size of a product in function points
(FP) can be expressed as the weighted sum of these five problem characteristics. The weights
associated with the five characteristics were proposed empirically and validated by
observations over many projects. The function point is computed in two steps. The first step is to
compute the unadjusted function point (UFP):

UFP = (Number of inputs)*4 + (Number of outputs)*5 + (Number of inquiries)*4 +
      (Number of files)*10 + (Number of interfaces)*10

Number of inputs: Each data item input by the user is counted. Data inputs should be
distinguished from user inquiries. Inquiries are user commands such as print-account-balance.
Inquiries are counted separately. It must be noted that individual data items input by the user
are not considered in the calculation of the number of inputs; rather, a group of related inputs
is considered as a single input.

For example, while entering the data concerning an employee into an employee payroll software,
the data items name, age, sex, address, phone number, etc. are together considered as a
single input. All these data items can be considered to be related, since they pertain to a single
employee.

Number of outputs: The outputs considered refer to reports printed, screen
outputs, error messages produced, etc. While counting the number of outputs, the individual
data items within a report are not considered, but a set of related data items is counted as one output.

Number of inquiries: Number of inquiries is the number of distinct interactive queries which
can be made by the users. These inquiries are the user commands which require specific
action by the system.

Number of files: Each logical file is counted. A logical file means a group of logically related
data. Thus, logical files can be data structures or physical files.

Number of interfaces: Here the interfaces considered are the interfaces used to exchange
information with other systems. Examples of such interfaces are data files on tapes, disks,
communication links with other systems, etc.

Once the unadjusted function point (UFP) is computed, the technical complexity factor
(TCF) is computed next. The TCF refines the UFP measure by considering fourteen other factors
such as high transaction rates, throughput, response time requirements, etc. Each of
these 14 factors is assigned a value from 0 (not present or no influence) to 5 (strong influence). The
resulting numbers are summed, yielding the total degree of influence (DI). Now, TCF is
computed as (0.65 + 0.01*DI). As DI can vary from 0 to 70, TCF can vary from 0.65 to 1.35.
Finally, FP = UFP*TCF.
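
As a worked illustration of the two-step computation described above, the Python sketch below evaluates UFP, TCF and FP for a hypothetical set of counts; the counts and the degree of influence are made-up values, not measurements from an actual product.

# Minimal sketch of the function point computation described above.
# All counts and the DI value are illustrative assumptions.
def function_points(inputs, outputs, inquiries, files, interfaces, degree_of_influence):
    ufp = inputs * 4 + outputs * 5 + inquiries * 4 + files * 10 + interfaces * 10
    tcf = 0.65 + 0.01 * degree_of_influence   # DI in [0, 70] gives TCF in [0.65, 1.35]
    return ufp, tcf, ufp * tcf

ufp, tcf, fp = function_points(inputs=30, outputs=60, inquiries=20,
                               files=5, interfaces=2, degree_of_influence=25)
print(f"UFP = {ufp}, TCF = {tcf:.2f}, FP = {fp:.1f}")
# For these made-up counts: UFP = 570, TCF = 0.90, FP = 513.0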

Shortcomings of the LOC metric


LOC as a measure of problem size has several shortcomings:
• LOC gives a numerical value of problem size that can vary widely with individual
coding style – different programmers lay out their code in different ways. For
example, one programmer might write several source instructions on a single line
whereas another might split a single instruction across several lines. Of course, this
problem can be easily overcome by counting the language tokens in the program
rather than the lines of code. However, a more intricate problem arises because the
length of a program depends on the choice of instructions used in writing the
program. Therefore, even for the same problem, different programmers might come
up with programs having different LOC counts. This situation does not improve even
if language tokens are counted instead of lines of code.

• A good problem size measure should consider the overall complexity of the
problem and the effort needed to solve it. That is, it should consider the total effort
needed to specify, design, code, test, etc. and not just the coding effort. LOC,
however, focuses on the coding activity alone; it merely computes the number of
source lines in the final program. We have already seen that coding is only a small
part of the overall software development activities. It is also wrong to argue that the
overall product development effort is proportional to the effort required in writing the
program code. This is because even though the design might be very complex, the
code might be straightforward, and vice versa. In such cases, code size is a grossly
improper indicator of the problem size.

• LOC measure correlates poorly with the quality and efficiency of the code. Larger
code size does not necessarily imply better quality or higher efficiency. Some
programmers produce lengthy and complicated code as they do not make effective
use of the available instruction set. In fact, it is very likely that a poor and sloppily
written piece of code might have a larger number of source instructions than a piece
that is neat and efficient.

• The LOC metric penalizes the use of higher-level programming languages, code reuse, etc.
The paradox is that if a programmer consciously uses several library routines, then
the LOC count will be lower. This would show up as a smaller program size. Thus, if
managers use the LOC count as a measure of the effort put in by the different
engineers (that is, productivity), they would be discouraging code reuse by
engineers.
• The LOC metric measures the lexical complexity of a program and does not address the
more important but subtle issues of logical or structural complexity. Between two
programs with equal LOC count, a program having complex logic would require
much more effort to develop than a program with very simple logic. To realize why
this is so, compare the effort required to develop a program having multiple nested
loop and decision constructs with that for a program having only sequential control
flow.

• It is very difficult to accurately estimate LOC in the final product from the problem
specification. The LOC count can be accurately computed only after the code has
been fully developed. Therefore, the LOC metric is of little use to project
managers during project planning, since project planning is carried out even before
any development activity has started. This is possibly the biggest shortcoming
of the LOC metric from the project manager’s perspective.

Feature point metric


A major shortcoming of the function point measure is that it does not take into account the
algorithmic complexity of a software. That is, the function point metric implicitly assumes that
the effort required to design and develop any two functionalities of the system is the same. But
we know that this is normally not true; the effort required to develop any two functionalities may
vary widely. It only takes the number of functions that the system supports into consideration,
without distinguishing the difficulty level of developing the various functionalities. To overcome
this problem, an extension of the function point metric called the feature point metric has been proposed.

The feature point metric incorporates an extra parameter: algorithm complexity. This
parameter ensures that the computed size using the feature point metric reflects the fact that
the greater the complexity of a function, the greater the effort required to develop it, and
therefore its size should be larger compared to simpler functions.

Project Estimation techniques


Estimation of various project parameters is a basic project planning activity. The important
project parameters that are estimated include: project size, effort required to develop the
software, project duration, and cost. These estimates not only help in quoting the project cost
to the customer, but are also useful in resource planning and scheduling. There are three broad
categories of estimation techniques:
• Empirical estimation techniques
• Heuristic techniques
• Analytical estimation techniques

Empirical Estimation Techniques


Empirical estimation techniques are based on making an educated guess of the project
parameters. While using this technique, prior experience with development of similar products
is helpful. Although empirical estimation techniques are based on common sense, different
activities involved in estimation have been formalized over the years. Two popular empirical
estimation techniques are: Expert judgment technique and Delphi cost estimation.

Expert Judgment Technique


Expert judgment is one of the most widely used estimation techniques. In this
approach, an expert makes an educated guess of the problem size after analyzing the
problem thoroughly. Usually, the expert estimates the cost of the different components
(i.e. modules or subsystems) of the system and then combines them to arrive at the
overall estimate. However, this technique is subject to human errors and individual bias.
Also, it is possible that the expert may overlook some factors inadvertently. Further, an
expert making an estimate may not have experience and knowledge of all aspects of a
project. For example, he may be conversant with the database and user interface parts
but may not be very knowledgeable about the computer communication part.

A more refined form of expert judgment is the estimation made by a group of
experts. Estimation by a group of experts minimizes factors such as individual oversight,
lack of familiarity with a particular aspect of a project, personal bias, and the desire to
win the contract through overly optimistic estimates. However, the estimate made by a
group of experts may still exhibit bias on issues where the entire group of experts may
be biased due to reasons such as political considerations. Also, the decision made by
the group may be dominated by overly assertive members.

Delphi cost estimation


The Delphi cost estimation approach tries to overcome some of the shortcomings of the
expert judgment approach. Delphi estimation is carried out by a team comprising a
group of experts and a coordinator. In this approach, the coordinator provides each
estimator with a copy of the software requirements specification (SRS) document and
a form for recording his cost estimate. Estimators complete their individual estimates
anonymously and submit them to the coordinator. In their estimates, the estimators mention
any unusual characteristic of the product which has influenced their estimation. The
coordinator prepares and distributes a summary of the responses of all the estimators,
and includes any unusual rationale noted by any of the estimators. Based on this
summary, the estimators re-estimate. This process is iterated for several rounds.
However, no discussion among the estimators is allowed during the entire estimation
process. The idea behind this is that if any discussion is allowed among the estimators,
then many estimators may easily get influenced by the rationale of an estimator who
may be more experienced or senior. After the completion of several iterations of
estimation, the coordinator takes the responsibility of compiling the results and
preparing the final estimate.
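
The Python sketch below shows one way a coordinator might summarise a round of anonymous estimates before circulating the summary back to the estimators; the estimator labels and figures are hypothetical, and real Delphi rounds would also carry the estimators' written rationale.

# Minimal sketch of summarising one Delphi round (illustrative figures only).
from statistics import median

round_1 = {"estimator_A": 4200, "estimator_B": 5600, "estimator_C": 3900}  # person-hours

def summarise(estimates):
    values = sorted(estimates.values())
    return {"min": values[0], "median": median(values), "max": values[-1]}

print("Summary circulated for the next round:", summarise(round_1))
# Estimators revise anonymously; the coordinator repeats the rounds until the
# estimates converge, then compiles the final estimate.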

Heuristic Techniques
Heuristic techniques assume that the relationships among the different project parameters can
be modeled using suitable mathematical expressions. Once the basic (independent)
parameters are known, the other (dependent) parameters can be easily determined by
substituting the values of the basic parameters in the mathematical expression. Different
heuristic estimation models can be divided into the following two classes: single variable
models and multivariable models.

Single variable estimation models provide a means to estimate the desired
characteristics of a problem, using some previously estimated basic (independent)
characteristic of the software product, such as its size. A single variable estimation model takes
the following form:

Estimated Parameter = c1 * e^d1

In the above expression, e is the characteristic of the software which has already been
estimated (the independent variable), and Estimated Parameter is the dependent parameter to be
estimated. The dependent parameter to be estimated could be effort, project duration, staff size,
etc. c1 and d1 are constants. The values of the constants c1 and d1 are usually determined using
data collected from past projects (historical data). The basic COCOMO model is an example of a
single variable cost estimation model.

A multivariable cost estimation model takes the following form:

Estimated Resource = c1*e1^d1 + c2*e2^d2 + ...

Where e1, e2, … are the basic (independent) characteristics of the software
already estimated, and c1, c2, d1, d2, … are constants. Multivariable estimation models are
expected to give more accurate estimates compared to the single variable models, since a
project parameter is typically influenced by several independent parameters. The independent
parameters influence the dependent parameter to different extents. This is modeled by the
constants c1, c2, d1, d2, …, whose values are usually determined from historical
data. The intermediate COCOMO model can be considered to be an example of a multivariable
estimation model.
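
Since the text names the basic COCOMO model as a single variable model, the Python sketch below shows the usual basic-COCOMO form Effort = a*(KLOC)^b and Tdev = c*(Effort)^d. The coefficient table holds the commonly quoted basic COCOMO constants for the three project classes, and the 32 KLOC input is a hypothetical example rather than data from a real project.

# Minimal sketch of a single variable heuristic model: basic COCOMO.
# Coefficients are the commonly quoted basic COCOMO constants; the 32 KLOC
# input below is a hypothetical example.
COCOMO_CONSTANTS = {
    # project class: (a, b, c, d)
    "organic":      (2.4, 1.05, 2.5, 0.38),
    "semidetached": (3.0, 1.12, 2.5, 0.35),
    "embedded":     (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, project_class="organic"):
    a, b, c, d = COCOMO_CONSTANTS[project_class]
    effort = a * kloc ** b      # effort in person-months
    tdev = c * effort ** d      # nominal development time in months
    return effort, tdev

effort, tdev = basic_cocomo(32, "organic")
print(f"Effort ~ {effort:.1f} person-months, nominal duration ~ {tdev:.1f} months")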

Analytical Estimation Techniques


Analytical estimation techniques derive the required results starting with basic assumptions
regarding the project. Thus, unlike empirical and heuristic techniques, analytical techniques do
have a scientific basis. Halstead’s software science is an example of an analytical technique.
Halstead’s software science can be used to derive some interesting results starting with a few
simple assumptions. Halstead’s software science is especially useful for estimating software
maintenance efforts. In fact, it outperforms both empirical and heuristic techniques when used
for predicting software maintenance efforts.

Halstead’s Software Science – An Analytical Technique


Halstead’s software science is an analytical technique to measure the size, development
effort, and development cost of software products. Halstead used a few primitive
program parameters to develop the expressions for the overall program length, potential
minimum volume, actual volume, effort, and development time.

For a given program, let:

▪ η1 be the number of unique operators used in the program,


▪ η2 be the number of unique operands used in the program,
▪ N1 be the total number of operators used in the program,
▪ N2 be the total number of operands used in the program.

Length and Vocabulary


The length of a program, as defined by Halstead, quantifies the total usage of all
operators and operands in the program. Thus, length N = N1 + N2. Halstead’s
definition of the length of the program as the total number of operators and operands
roughly agrees with the intuitive notion of the program length as the total number
of tokens used in the program.
The program vocabulary is the number of unique operators and operands
used in the program. Thus, program vocabulary η = η1 + η2.

Program Volume
The length of a program (i.e. the total number of operators and operands used in
the code) depends on the choice of the operators and operands used. In other
words, for the same programming problem, the length would depend on the
programming style. This type of dependency would produce different measures of
length for essentially the same problem when different programming languages are
used. Thus, while expressing program size, the programming language used must
be taken into consideration:

V = N log2 η

Here the program volume V is the minimum number of bits needed to encode the
program. In fact, to represent η different identifiers uniquely, at least log2 η bits (where
η is the program vocabulary) will be needed. In this scheme, N log2 η bits will be
needed to store a program of length N. Therefore, the volume V represents the size
of the program by approximately compensating for the effect of the programming
language used.
Potential Minimum Volume
The potential minimum volume V* is defined as the volume of the most succinct program in which a problem can be coded. The minimum volume is obtained when the program can be expressed using a single source code instruction, say a function call like foo();. In other words, the volume is bound from below due to the fact that a program would have at least two operators and no less than the requisite number of operands.
Thus, if an algorithm operates on input and output data d1, d2, …, dn, the most succinct program would be f(d1, d2, …, dn); for which η1 = 2, η2 = n. Therefore, V* = (2 + η2) log2(2 + η2).
The program level L is given by L = V*/V. The concept of program level L is introduced in an attempt to measure the level of abstraction provided by the programming language. Using this definition, languages can be ranked into levels that also appear intuitively correct. The above result implies that the higher the level of a language, the less effort it takes to develop a program using that language. This result agrees with the intuitive notion that it takes more effort to develop a program in assembly language than to develop a program in a high-level language to solve a problem.

Effort and Time


The effort required to develop a program can be obtained by dividing the program volume by the level of the programming language used to develop the code. Thus, effort E = V/L, where E is the number of mental discriminations required to implement the program and also the effort required to read and understand the program. Thus, the programming effort E = V²/V* (since L = V*/V) varies as the square of the volume. Experience shows that E is well correlated with the effort needed for maintenance of an existing program.
The programmer's time T = E/S, where S is the speed of mental discriminations. The value of S has been empirically developed from psychological reasoning, and its recommended value for programming applications is 18.
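The following Python sketch pulls the above definitions together. Given counts of unique and total operators and operands for some program, it computes length, vocabulary, volume, potential minimum volume, level, effort, and time. The counts passed in the example call are made up purely for illustration; S = 18 is the recommended value mentioned above.

import math

# Minimal sketch of Halstead's basic measures; eta1/eta2/N1/N2 are assumed
# to have been counted from the program being analysed.
def halstead_metrics(eta1, eta2, N1, N2):
    N = N1 + N2                                  # program length
    eta = eta1 + eta2                            # program vocabulary
    V = N * math.log2(eta)                       # volume
    V_star = (2 + eta2) * math.log2(2 + eta2)    # potential minimum volume
    L = V_star / V                               # program level
    E = V / L                                    # effort (= V^2 / V*)
    T = E / 18                                   # time, with S = 18
    return {"N": N, "eta": eta, "V": V, "V*": V_star, "L": L, "E": E, "T": T}

# Hypothetical counts for a small program.
print(halstead_metrics(eta1=12, eta2=7, N1=27, N2=15))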

Length Estimation
Even though the length of a program can be found by calculating the total number of operators and operands in a program, Halstead suggests a way to determine the length of a program using only the number of unique operators and operands used in the program. Using this method, program parameters such as length, volume, cost, effort, etc. can be determined even before the start of any programming activity. His method is summarized below.
Halstead assumed that it is quite unlikely that a program has several identical parts – in formal language terminology, identical substrings – of length greater than η (η being the program vocabulary). In fact, once a piece of code occurs identically at several places, it is made into a procedure or a function. Thus, it can be assumed that any program of length N consists of N/η unique strings of length η. Now, it is a standard combinatorial result that for any given alphabet of size K, there are exactly K^r different strings of length r.

Thus,

N/η ≤ η^η, or N ≤ η^(η+1)

Since operators and operands usually alternate in a program, the upper bound can be further refined into N ≤ η η1^η1 η2^η2. Also, N must include not only the ordered set of n elements, but it should also include all possible subsets of that ordered set, i.e. the power set of N strings (this particular reasoning of Halstead is not very convincing!!!).
Therefore,

2^N = η η1^η1 η2^η2

Or, taking logarithm on both sides,

N = log2 η + log2(η1^η1 η2^η2)

So we get, approximately (by ignoring the log2 η term),

N = log2(η1^η1 η2^η2)
Or,
N = log2(η1^η1) + log2(η2^η2)
  = η1 log2 η1 + η2 log2 η2

Experimental evidence gathered from the analysis of a large number of programs suggests that the computed and actual lengths match very closely. However, the results may be inaccurate when small programs are considered individually.
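As a quick illustration of the length estimation formula, the sketch below computes N = η1 log2 η1 + η2 log2 η2 for hypothetical operator and operand counts and compares the estimate with an assumed actual length.

import math

def estimated_length(eta1, eta2):
    # Halstead's length estimate from unique operator/operand counts only.
    return eta1 * math.log2(eta1) + eta2 * math.log2(eta2)

# Hypothetical counts; the "actual" length of 42 is assumed for comparison.
print(round(estimated_length(eta1=12, eta2=7), 1), "vs actual length", 42)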

In conclusion, Halstead’s theory tries to provide a formal definition and


quantification of such qualitative attributes as program complexity, ease of
understanding, and the level of abstraction based on some low-level parameters
such as the number of operands, and operators appearing in the program.
Halstead’s software science provides gross estimation of properties of a large
collection of software, but extends to individual cases rather inaccurately.

COCOMO Model

COCOMO (Constructive Cost Model) is a regression model based on LOC, i.e. the number of Lines of Code. It is a procedural cost estimate model for software projects and is often used as a process of reliably predicting the various parameters associated with making a project, such as size, effort, cost, time, and quality. It was proposed by Barry Boehm in 1981 and is based on the study of 63 projects, which makes it one of the best-documented models.
The key parameters that define the quality of any software product, and which are also an outcome of COCOMO, are primarily Effort and Schedule:
• Effort: Amount of labor that will be required to complete a task. It is measured
in person-months units.
• Schedule: Simply means the amount of time required for the completion of the
job, which is, of course, proportional to the effort put. It is measured in the units
of time such as weeks, months.
Different models of Cocomo have been proposed to predict the cost estimation at
different levels, based on the amount of accuracy and correctness required. All of
these models can be applied to a variety of projects, whose characteristics determine
the value of constant to be used in subsequent calculations. These characteristics
pertaining to different system types are mentioned below.
Boehm’s definition of organic, semidetached, and embedded systems:
1. Organic – A software project is said to be an organic type if the team size
required is adequately small, the problem is well understood and has been solved
in the past and also the team members have a nominal experience regarding the
problem.
2. Semi-detached – A software project is said to be a Semi-detached type if the
vital characteristics such as team-size, experience, knowledge of the various
programming environment lie in between that of organic and Embedded. The
projects classified as Semi-Detached are comparatively less familiar and difficult
to develop compared to the organic ones and require more experience and better
guidance and creativity. Eg: Compilers or different Embedded Systems can be
considered of Semi-Detached type.
3. Embedded – A software project requiring the highest level of complexity, creativity, and experience falls under this category. Such software requires a larger team size than the other two models, and the developers need to be sufficiently experienced and creative to develop such complex models.
All the above system types utilize different values of the constants used in Effort
Calculations.
Types of Models: COCOMO consists of a hierarchy of three increasingly detailed and
accurate forms. Any of the three forms can be adopted according to our requirements.
These are types of COCOMO model:
1. Basic COCOMO Model
2. Intermediate COCOMO Model
3. Detailed COCOMO Model

The first level, Basic COCOMO can be used for quick and slightly rough calculations
of Software Costs. Its accuracy is somewhat restricted due to the absence of sufficient
factor considerations.

Intermediate COCOMO takes these Cost Drivers into account and Detailed COCOMO
additionally accounts for the influence of individual project phases, i.e in case of
Detailed it accounts for both these cost drivers and also calculations are performed
phase wise henceforth producing a more accurate result. These two models are further
discussed below.

Estimation of Effort: Calculations –

Basic Model –

Effort E = a(KLOC)^b
Time = c(Effort)^d
Persons required = Effort / Time

The above formulae are used for the cost estimation of the basic COCOMO model, and are also used in the subsequent models. The constant values a, b, c, and d of the Basic Model for the different categories of systems are:

SOFTWARE PROJECTS      a       b       c       d

Organic                2.4     1.05    2.5     0.38
Semi-detached          3.0     1.12    2.5     0.35
Embedded               3.6     1.20    2.5     0.32


The effort is measured in Person-Months and as evident from the formula is
dependent on Kilo-Lines of code.
The development time is measured in Months.
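The short sketch below applies the Basic COCOMO constants from the table above to a hypothetical project size (the 32 KLOC figure is only an assumed input).

# Basic COCOMO constants (a, b, c, d) per project category, taken from the table above.
BASIC_COCOMO = {
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, category):
    a, b, c, d = BASIC_COCOMO[category]
    effort = a * (kloc ** b)          # effort in person-months
    time = c * (effort ** d)          # development time in months
    persons = effort / time           # average staff size
    return effort, time, persons

# Hypothetical 32 KLOC organic project.
effort, time, persons = basic_cocomo(32, "organic")
print(f"Effort = {effort:.1f} PM, Time = {time:.1f} months, Staff = {persons:.1f}")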
Intermediate Model –
The basic COCOMO model assumes that effort is only a function of the number of lines of code and some constants evaluated according to the different software systems. However, in reality, no system's effort and schedule can be solely calculated on the basis of Lines of Code. For that, various other factors such as reliability, experience, and capability are also considered. These factors are known as Cost Drivers, and the Intermediate Model utilizes 15 such drivers for cost estimation.
Classification of Cost Drivers and their attributes:
(i) Product attributes –
• Required software reliability extent
• Size of the application database
• The complexity of the product
(ii) Hardware attributes –
• Run-time performance constraints
• Memory constraints
• The volatility of the virtual machine environment
• Required turnabout time
(iii) Personnel attributes –
• Analyst capability
• Software engineering capability
• Applications experience
• Virtual machine experience
• Programming language experience
(iv) Project attributes –
• Use of software tools
• Application of software engineering methods
• Required development schedule
The Intermediate COCOMO formula now takes the form:

E = a(KLOC)^b * EAF

where EAF (Effort Adjustment Factor) is the product of the effort multipliers chosen for the 15 cost drivers. The values of a and b in case of the intermediate model are as follows:

SOFTWARE PROJECTS      a       b

Organic                3.2     1.05
Semi-detached          3.0     1.12
Embedded               2.8     1.20
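A minimal sketch of the intermediate calculation is given below, assuming E = a(KLOC)^b * EAF with EAF taken as the product of the multipliers chosen for the cost drivers. The two multiplier values in the example call are hypothetical.

# Intermediate COCOMO: E = a * (KLOC)^b * EAF
INTERMEDIATE_COCOMO = {
    "organic":       (3.2, 1.05),
    "semi-detached": (3.0, 1.12),
    "embedded":      (2.8, 1.20),
}

def intermediate_cocomo(kloc, category, multipliers):
    a, b = INTERMEDIATE_COCOMO[category]
    eaf = 1.0
    for m in multipliers:             # EAF = product of the selected cost driver multipliers
        eaf *= m
    return a * (kloc ** b) * eaf      # effort in person-months

# Hypothetical multipliers, e.g. one driver rated above nominal and one below.
print(round(intermediate_cocomo(32, "semi-detached", [1.15, 0.86]), 1), "person-months")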

Detailed Model –
Detailed COCOMO incorporates all characteristics of the intermediate version with
an assessment of the cost driver’s impact on each step of the software engineering
process. The detailed model uses different effort multipliers for each cost driver
attribute. In detailed cocomo, the whole software is divided into different
modules and then we apply COCOMO in different modules to estimate effort
and then sum the effort.
The Six phases of detailed COCOMO are:
1. Planning and requirements
2. System design
3. Detailed design
4. Module code and test
5. Integration and test
6. Cost Constructive model
The effort is calculated as a function of program size and a set of cost drivers given according to each phase of the software lifecycle.

What is Risk?
"Tomorrow problems are today's risk." Hence, a clear definition of a "risk" is a problem
that could cause some loss or threaten the progress of the project, but which has not
happened yet.

These potential issues might harm the cost, schedule or technical success of the project, the quality of our software product, or project team morale.

Risk Management is the system of identifying, addressing, and eliminating these problems before they can damage the project.

We need to differentiate risks, as potential issues, from the current problems of the
project.

Different methods are required to address these two kinds of issues.

For example, staff shortage, because we have not been able to select people with the right technical skills, is a current problem, but the threat of our technical persons being hired away by the competition is a risk.

Risk Management
A software project can be concerned with a large variety of risks. In order to be adept
to systematically identify the significant risks which might affect a software project, it is
essential to classify risks into different classes. The project manager can then check
which risks from each class are relevant to the project.

There are three main classifications of risks which can affect a software project:

1. Project risks
2. Technical risks
3. Business risks

1. Project risks: Project risks concern different forms of budgetary, schedule, personnel, resource, and customer-related problems. A vital project risk is schedule slippage. Since software is intangible, it is very tough to monitor and control a software project. It is very tough to control something which cannot be seen. For any manufacturing project, such as the manufacturing of cars, the project executive can see the product taking shape.

2. Technical risks: Technical risks concern potential design, implementation, interfacing, testing, and maintenance issues. They also include ambiguous specification, incomplete specification, changing specification, technical uncertainty, and technical obsolescence. Most technical risks appear due to the development team's insufficient knowledge about the project.

3. Business risks: This type of risk includes the risk of building an excellent product that no one needs, losing budgetary or personnel commitments, etc.

Other risk categories

1. Known risks: Those risks that can be uncovered after careful assessment of the project plan, the business and technical environment in which the project is being developed, and other reliable data sources (e.g., unrealistic delivery date).
2. Predictable risks: Those risks that are hypothesized from previous project experience (e.g., staff turnover).
3. Unpredictable risks: Those risks that can and do occur, but are extremely tough to identify in advance.

Risk Management Activities


Risk management consists of three main activities, as shown in fig:
Risk Assessment
The objective of risk assessment is to division the risks in the condition of their loss,
causing potential. For risk assessment, first, every risk should be rated in two
methods:

o The probability of the risk coming true (denoted as r).

o The consequence of the problems associated with that risk (denoted as s).

Based on these two methods, the priority of each risk can be estimated:

p=r*s

Where p is the priority with which the risk must be controlled, r is the probability of the
risk becoming true, and s is the severity of loss caused due to the risk becoming true.
If all identified risks are prioritised, then the most likely and damaging risks can be handled first, and more comprehensive risk abatement methods can be designed for these risks.
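The prioritisation step can be expressed directly in code. The sketch below ranks a few hypothetical risks by p = r * s so that the most likely and damaging ones surface first; the probability and severity figures are invented for illustration.

# Each risk: (description, probability r, severity of loss s) -- values are hypothetical.
risks = [
    ("Schedule slippage",        0.6, 7),
    ("Key developer may leave",  0.3, 9),
    ("Requirements may change",  0.8, 4),
]

# Priority p = r * s; handle the highest-priority risks first.
for name, r, s in sorted(risks, key=lambda risk: risk[1] * risk[2], reverse=True):
    print(f"{name}: priority = {r * s:.1f}")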

1. Risk Identification: The project organizer needs to anticipate the risk in the
project as early as possible so that the impact of risk can be reduced by making
effective risk management planning.

A project can be affected by a large variety of risks. In order to identify the significant risks that might affect a project, it is necessary to categorize risks into different classes.

There are different types of risks which can affect a software project:
1. Technology risks: Risks that arise from the software or hardware technologies that are used to develop the system.
2. People risks: Risks that are connected with the people in the development team.
3. Organizational risks: Risks that arise from the organizational environment where the software is being developed.
4. Tools risks: Risks that arise from the software tools and other support software used to create the system.
5. Requirement risks: Risks that arise from changes to the customer requirements and the process of managing the requirements change.
6. Estimation risks: Risks that arise from the management estimates of the resources required to build the system.

2. Risk Analysis: During the risk analysis process, you have to consider every identified risk and make a judgment about the probability and seriousness of that risk.

There is no simple way to do this. You have to rely on your perception and experience
of previous projects and the problems that arise in them.

It is not possible to make an exact numerical estimate of the probability and seriousness of each risk. Instead, you should assign the risk to one of several bands:

1. The probability of the risk might be determined as very low (0-10%), low (10-
25%), moderate (25-50%), high (50-75%) or very high (+75%).
2. The effect of the risk might be determined as catastrophic (threaten the survival
of the plan), serious (would cause significant delays), tolerable (delays are
within allowed contingency), or insignificant.

Risk Control
It is the process of managing risks to achieve desired outcomes. After all the identified risks of a project are assessed, plans must be made to contain the most harmful and the most likely risks. Different risks need different containment methods. In fact, most risks need ingenuity on the part of the project manager in tackling the risk.

There are three main methods to plan for risk management:

1. Avoid the risk: This may take several ways such as discussing with the client
to change the requirements to decrease the scope of the work, giving incentives
to the engineers to avoid the risk of human resources turnover, etc.
2. Transfer the risk: This method involves getting the risky element developed
by a third party, buying insurance cover, etc.
3. Risk reduction: This means planning ways to contain the loss due to a risk. For instance, if there is a risk that some key personnel might leave, new recruitment can be planned.

Risk Leverage: To choose between the various methods of handling a risk, the project manager must consider the cost of handling the risk and the corresponding reduction of risk. For this, the risk leverage of the various risks can be estimated.

Risk leverage is the difference in risk exposure divided by the cost of reducing the risk.

Risk leverage = (risk exposure before reduction - risk exposure after reduction) / (cost of reduction)
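For example, if a risk's exposure is (hypothetically) 10 units before mitigation and 4 units after, and the mitigation costs 2 units, the leverage is (10 - 4) / 2 = 3; the higher the leverage, the more worthwhile the mitigation. A one-line sketch:

def risk_leverage(exposure_before, exposure_after, cost_of_reduction):
    # Risk leverage = (exposure before - exposure after) / cost of reduction
    return (exposure_before - exposure_after) / cost_of_reduction

print(risk_leverage(10, 4, 2))   # hypothetical figures -> 3.0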

1. Risk planning: The risk planning method considers each of the key risks that have been identified and develops ways to manage these risks.

For each of the risks, you have to think of the actions that you may take to minimize the disruption to the plan if the problem identified in the risk occurs.

You also should think about data that you might need to collect while monitoring the
plan so that issues can be anticipated.

Again, there is no easy process that can be followed for contingency planning. It relies on the judgment and experience of the project manager.

2. Risk Monitoring: Risk monitoring is the method of checking that your assumptions about the product, process, and business risks have not changed.
Software Requirement Specification document (SRS)

Following are the characteristics of a good SRS document:


1. Correctness:
User review is used to ensure the correctness of requirements stated in the SRS. SRS is
said to be correct if it covers all the requirements that are actually expected from the
system.
2. Completeness:
Completeness of SRS indicates every sense of completion including the numbering of all
the pages, resolving the to be determined parts to as much extent as possible as well as
covering all the functional and non-functional requirements properly.
3. Consistency:
Requirements in SRS are said to be consistent if there are no conflicts between any set
of requirements. Examples of conflict include differences in terminologies used at
separate places, logical conflicts like time period of report generation, etc.
4. Unambiguousness:
An SRS is said to be unambiguous if all the requirements stated have only one interpretation. Some of the ways to prevent ambiguity include the use of modelling techniques like ER diagrams, proper reviews and buddy checks, etc.
5. Modifiability:
SRS should be made as modifiable as possible and should be capable of easily
accepting changes to the system to some extent. Modifications should be properly
indexed and cross-referenced.
6. Verifiability:
An SRS is verifiable if there exists a specific technique to quantifiably measure the
extent to which every requirement is met by the system. For example, a requirement
stating that the system must be user-friendly is not verifiable and listing such
requirements should be avoided.
7. Traceability:
One should be able to trace a requirement to a design component and then to a code
segment in the program. Similarly, one should be able to trace a requirement to the
corresponding test cases.
8. Design Independence:
There should be an option to choose from multiple design alternatives for the final
system. More specifically, the SRS should not include any implementation details.
9. Testability:
An SRS should be written in such a way that it is easy to generate test cases and test
plans from the document.
10. Understandable by the customer:
An end user may be an expert in his/her specific domain but might not be an expert in computer science. Hence, the use of formal notations and symbols should be avoided to as much extent as possible. The language should be kept easy and clear.
11. Right level of abstraction:
If the SRS is written for the requirements phase, the details should be explained
explicitly. Whereas, for a feasibility study, fewer details can be used. Hence, the level
of abstraction varies according to the purpose of the SRS.

Introduction to Components of the SRS

In the previous section, we discussed various characteristics that help in completely specifying the requirements. Here we describe some of the system properties that an SRS should specify. The basic issues an SRS must address are:

Functional requirements

Performance requirements

Design constraints

External interface requirements

Conceptually, any SRS should have these components. Now we will discuss them one by one.

1. Functional Requirements
Functional requirements specify what output should be produced from the given inputs. So
they basically describe the connectivity between the input and output of the system. For each
functional requirement:

1. A detailed description of all the data inputs and their sources, the units of measure, and the range of valid inputs should be specified;

2. All the operations to be performed on the input data to obtain the output should be specified; and

3. Care must be taken not to specify any algorithms that are not parts of the system but that may be needed to implement the system.

4. It must clearly state what the system should do if it behaves abnormally when any invalid input is given or due to some error during computation. Specifically, it should specify the behaviour of the system for invalid inputs and invalid outputs.

2. Performance Requirements (Speed Requirements)

This part of an SRS specifies the performance constraints on the software system. All the requirements related to the performance characteristics of the system must be clearly specified. Performance requirements are typically expressed as processed transactions per second, response time from the system for a user event, screen refresh time, or a combination of these. It is a good idea to pin down performance requirements for the most used or critical transactions, user events, and screens.

3. Design Constraints

The client environment may restrict the designer to include some design constraints that must
be followed. The various design constraints are standard compliance, resource limits,
operating environment, reliability and security requirements and policies that may have an
impact on the design of the system. An SRS should identify and specify all such constraints.
Standard Compliance: It specifies the requirements for the standards the system must follow. The standards may include the report format and accounting procedures.

Hardware Limitations: The software needs some existing or predetermined hardware to operate, thus imposing restrictions on the design. Hardware limitations can include the types of machines to be used, operating system availability, memory space, etc.

Fault Tolerance: Fault tolerance requirements can place a major constraint on how the
system is to be designed. Fault tolerance requirements often make the system more complex
and expensive, so they should be minimized.

Security: Currently, security requirements have become essential and major for all types of systems. Security requirements place restrictions on the use of certain commands, control access to data, provide different kinds of access requirements for different people, require the use of passwords and cryptographic techniques, and maintain a log of activities in the system.

4. External Interface Requirements

For each external interface requirements:

1. All the possible interactions of the software with people, hardware, and other software should be clearly specified,

2. The characteristics of each user interface of the software product should be specified and

3. The SRS should specify the logical characteristics of each interface between the software
product and the hardware components for hardware interfacing.

Properties of a good SRS document

The essential properties of a good SRS document are the following:

Concise: The SRS report should be concise and at the same time, unambiguous, consistent,
and complete. Verbose and irrelevant descriptions decrease readability and also increase error
possibilities.

Structured: It should be well-structured. A well-structured document is simple to understand and modify. In practice, the SRS document undergoes several revisions to cope with the user requirements. Often, user requirements evolve over a period of time. Therefore, to make the modifications to the SRS document easy, it is vital to make the report well-structured.

Black-box view: It should only define what the system should do and refrain from stating how
to do these. This means that the SRS document should define the external behavior of the
system and not discuss the implementation issues. The SRS report should view the system to
be developed as a black box and should define the externally visible behavior of the system.
For this reason, the SRS report is also known as the black-box specification of a system.

Conceptual integrity: It should show conceptual integrity so that the reader can easily understand it.
Response to undesired events: It should characterize acceptable responses to unwanted events. These are called system responses to exceptional conditions.
Verifiable: All requirements of the system, as documented in the SRS document, should be
correct. This means that it should be possible to decide whether or not requirements have
been met in an implementation.

Structure of SRS

Decision tree

A decision tree is a map of the possible outcomes of a series of related choices. It allows an
individual or organization to weigh possible actions against one another based on their costs,
probabilities, and benefits.

As the name goes, it uses a tree-like model of decisions. They can be used either to drive
informal discussion or to map out an algorithm that predicts the best choice mathematically.
A decision tree typically starts with a single node, which branches into possible outcomes.
Each of those outcomes leads to additional nodes, which branch off into other possibilities.
This gives it a tree-like shape.

Decision table is a brief visual representation for specifying which actions to perform
depending on given conditions. The information represented in decision tables can also be
represented as decision trees or in a programming language using if-then-else and switch-
case statements.
A decision table is a good way to deal with different combinations of inputs and their corresponding outputs, and it is also called a cause-effect table.
CONDITIONS STEP 1 STEP 2 STEP 3 STEP 4
Condition 1 Y Y N N
Condition 2 Y N Y N
Condition 3 Y N N Y
Condition 4 N Y Y N

MODULE 3
Software Design
Software design is a mechanism to transform user requirements into some suitable form, which
helps the programmer in software coding and implementation. It deals with representing the
client's requirement, as described in SRS (Software Requirement Specification) document, into a
form, i.e., easily implementable using programming language.

The software design phase is the first step in the SDLC (Software Development Life Cycle) that moves the concentration from the problem domain to the solution domain. In software design, we consider the system to be a set of components or modules with clearly defined behaviours and boundaries.

Objectives of Software Design


Following are the purposes of Software design:

1. Correctness: Software design should be correct as per requirements.

2. Completeness: The design should have all components like data structures, modules, and external interfaces, etc.
3. Efficiency: Resources should be used efficiently by the program.
4. Flexibility: Able to modify on changing needs.
5. Consistency: There should not be any inconsistency in the design.
6. Maintainability: The design should be simple enough that it can be easily maintained by other designers.

Software Design Principles


Software design principles are concerned with providing means to handle the complexity of the
design process effectively. Effectively managing the complexity will not only reduce the effort
needed for design but can also reduce the scope of introducing errors during design.

Following are the principles of Software Design

Problem Partitioning
For a small problem, we can handle the entire problem at once, but for a significant problem we divide and conquer: the problem is divided into smaller pieces so that each piece can be handled separately.

For software design, the goal is to divide the problem into manageable pieces.

Benefits of Problem Partitioning


1. Software is easy to understand
2. Software becomes simple
3. Software is easy to test
4. Software is easy to modify
5. Software is easy to maintain
6. Software is easy to expand
These pieces cannot be entirely independent of each other as they together form the system.
They have to cooperate and communicate to solve the problem. This communication adds
complexity.

Abstraction
An abstraction is a tool that enables a designer to consider a component at an abstract level
without bothering about the internal details of the implementation. Abstraction can be used for
existing element as well as the component being designed.

Here, there are two common abstraction mechanisms

1. Functional Abstraction
2. Data Abstraction

Functional Abstraction
i. A module is specified by the function it performs.
ii. The details of the algorithm to accomplish the functions are not visible to the user of the
function.

Functional abstraction forms the basis for Function oriented design approaches.

Data Abstraction

Details of the data elements are not visible to the users of data. Data Abstraction forms the basis
for Object Oriented design approaches.

Modularity
Modularity refers to the division of software into separate modules which are differently named and addressed and are integrated later on to obtain the completely functional software. It is the only property that allows a program to be intellectually manageable. Single large programs are difficult to understand and read due to a large number of reference variables, control paths, global variables, etc.

The desirable properties of a modular system are:

o Each module is a well-defined system that can be used with other applications.
o Each module has single specified objectives.
o Modules can be separately compiled and saved in the library.
o Modules should be easier to use than to build.
o Modules are simpler from outside than inside.

Modular Design

Modular design reduces the design complexity and results in easier and faster implementation by
allowing parallel development of various parts of a system. We discuss a different section of
modular design in detail in this section:

1. Functional Independence: Functional independence is achieved by developing functions that


perform only one kind of task and do not excessively interact with other modules. Independence
is important because it makes implementation more accessible and faster. The independent
modules are easier to maintain, test, and reduce error propagation and can be reused in other
programs as well. Thus, functional independence is a good design feature which ensures software
quality.

It is measured using two criteria:

o Cohesion: It measures the relative function strength of a module.


o Coupling: It measures the relative interdependence among modules.

2. Information hiding: The principle of information hiding suggests that modules should be characterized by the design decisions that they hide from all others. In other words, modules should be specified so that the data included within a module is inaccessible to other modules that have no need for such information.

The use of information hiding as a design criterion for modular systems provides the most significant benefits when modifications are required during testing and, later, during software maintenance. This is because, as most data and procedures are hidden from other parts of the software, inadvertent errors introduced during modifications are less likely to propagate to different locations within the software.
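A minimal sketch of information hiding, using a hypothetical Stack module: clients call push and pop only, while the underlying list is treated as private, so a later change of the internal representation does not propagate to other modules.

class Stack:
    """Hypothetical module illustrating information hiding."""

    def __init__(self):
        self._items = []          # internal representation, hidden from clients

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()

# Client code depends only on the public operations, not on the hidden list,
# so the representation could later change (e.g. to a linked list) safely.
s = Stack()
s.push(10)
print(s.pop())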

Strategy of Design
A good system design strategy is to organize the program modules in such a way that they are easy to develop and, later, easy to change. Structured design methods help developers to deal with the size and complexity of programs. Analysts generate instructions for the developers about how code should be written and how pieces of code should fit together to form a program.

To design a system, there are two possible approaches:

1. Top-down Approach
2. Bottom-up Approach

1. Top-down Approach: This approach starts with the identification of the main components
and then decomposing them into their more detailed sub-components.

2. Bottom-up Approach: A bottom-up approach begins with the lower details and moves
towards up the hierarchy, as shown in fig. This approach is suitable in case of an existing system.
Coupling and Cohesion
Module Coupling
In software engineering, coupling is the degree of interdependence between software modules. Two modules that are tightly coupled are strongly dependent on each other. However, two modules that are loosely coupled are not strongly dependent on each other. Uncoupled modules have no interdependence at all between them.

The various types of coupling techniques are shown in fig:

A good design is the one that has low coupling. Coupling is measured by the number of
relations between the modules. That is, the coupling increases as the number of calls
between modules increase or the amount of shared data is large. Thus, it can be said
that a design with high coupling will have more errors.

Types of Module Coupling


1. No Direct Coupling: There is no direct coupling between M1 and M2.

In this case, modules are subordinates to different modules. Therefore, no direct coupling.

2. Data Coupling: When data of one module is passed to another module, this is called data
coupling.

3. Stamp Coupling: Two modules are stamp coupled if they communicate using composite data
items such as structure, objects, etc. When the module passes non-global data structure or
entire structure to another module, they are said to be stamp coupled. For example, passing
structure variable in C or object in C++ language to a module.

4. Control Coupling: Control Coupling exists among two modules if data from one module is
used to direct the structure of instruction execution in another.

5. External Coupling: External Coupling arises when two modules share an externally imposed
data format, communication protocols, or device interface. This is related to communication to
external tools and devices.

6. Common Coupling: Two modules are common coupled if they share information through
some global data items.

7. Content Coupling: Content Coupling exists among two modules if they share code, e.g., a
branch from one module into another module.
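The short sketch below contrasts three of these coupling types using hypothetical functions: data coupling (only a needed value is passed), stamp coupling (a whole composite record is passed), and control coupling (a flag directs what the called module does).

from dataclasses import dataclass

@dataclass
class Employee:                           # composite data item used for stamp coupling
    name: str
    basic_pay: float
    grade: str

def compute_tax(basic_pay):               # data coupling: only the needed value is passed
    return basic_pay * 0.1

def print_payslip(employee):              # stamp coupling: the whole structure is passed
    print(employee.name, compute_tax(employee.basic_pay))

def format_amount(value, as_words):       # control coupling: a flag steers the internal logic
    return "ten" if as_words else str(value)   # (word form hard-coded for brevity)

print_payslip(Employee("Asha", 1000.0, "B"))
print(format_amount(10, as_words=True))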

Module Cohesion
In computer programming, cohesion refers to the degree to which the elements of a module belong together. Thus, cohesion measures the strength of relationships between pieces of functionality within a given module. For example, in highly cohesive systems, functionality is strongly related.

Cohesion is an ordinal type of measurement and is generally described as "high cohesion" or


"low cohesion."
Types of Modules Cohesion

1. Functional Cohesion: Functional cohesion is said to exist if the different elements of a module cooperate to achieve a single function.
2. Sequential Cohesion: A module is said to possess sequential cohesion if the elements of the module form the components of a sequence, where the output from one component of the sequence is input to the next.
3. Communicational Cohesion: A module is said to have communicational cohesion if all tasks of the module refer to or update the same data structure, e.g., the set of functions defined on an array or a stack.
4. Procedural Cohesion: A module is said to have procedural cohesion if the functions of the module are all parts of a procedure in which a particular sequence of steps has to be carried out for achieving a goal, e.g., the algorithm for decoding a message.
5. Temporal Cohesion: When a module includes functions that are associated by the fact that all of them must be executed at the same time, the module is said to exhibit temporal cohesion.
6. Logical Cohesion: A module is said to be logically cohesive if all the elements of the
module perform a similar operation. For example Error handling, data input and data
output, etc.
7. Coincidental Cohesion: A module is said to have coincidental cohesion if it performs a
set of tasks that are associated with each other very loosely, if at all.
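To make two of these categories concrete, the hypothetical sketch below places a functionally cohesive module (every statement serves the single task of computing a grade) next to a coincidentally cohesive one (unrelated tasks bundled together).

# Functional cohesion: every element cooperates to achieve a single function.
def compute_grade(marks):
    total = sum(marks)
    average = total / len(marks)
    return "PASS" if average >= 40 else "FAIL"

# Coincidental cohesion: unrelated tasks thrown into one module (poor design).
def misc_utilities(text, numbers):
    print(text.upper())                 # formatting task
    print(sum(numbers))                 # arithmetic task
    print("backup complete")            # pretend housekeeping task

print(compute_grade([55, 62, 47]))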

Differentiate between Coupling and Cohesion

o Coupling is also called Inter-Module Binding, whereas cohesion is also called Intra-Module Binding.
o Coupling shows the relationships between modules, whereas cohesion shows the relationships within a module.
o Coupling shows the relative independence between modules, whereas cohesion shows the module's relative functional strength.
o While creating, you should aim for low coupling (dependency among modules should be less), whereas you should aim for high cohesion (a cohesive component/module focuses on a single function, i.e., single-mindedness, with little interaction with other modules of the system).
o In coupling, modules are linked to other modules, whereas in cohesion the module focuses on a single thing.
Software Design Approaches
Here are two generic approaches for software designing:
Top Down Design
We know that a system is composed of more than one sub-system and it contains a number of components. Further, these sub-systems and components may have their own set of sub-systems and components, and this creates a hierarchical structure in the system.
Top-down design takes the whole software system as one entity and then decomposes it to
achieve more than one sub-system or component based on some characteristics. Each sub-
system or component is then treated as a system and decomposed further. This process
keeps on running until the lowest level of system in the top-down hierarchy is achieved.
Top-down design starts with a generalized model of system and keeps on defining the more
specific part of it. When all components are composed the whole system comes into
existence.
Top-down design is more suitable when the software solution needs to be designed from
scratch and specific details are unknown.
Bottom-up Design
The bottom-up design model starts with the most specific and basic components. It proceeds with composing higher levels of components by using basic or lower-level components. It keeps creating higher-level components until the desired system evolves as one single component. With each higher level, the amount of abstraction is increased.
Bottom-up strategy is more suitable when a system needs to be created from some existing
system, where the basic primitives can be used in the newer system.
Neither the top-down nor the bottom-up approach is practical on its own. Instead, a good combination of both is used.
Function Oriented Design
Function Oriented design is a method to software design where the model is decomposed into a set of
interacting units or modules where each unit or module has a clearly defined function. Thus, the system
is designed from a functional viewpoint.

What is Structured Analysis?


Structured Analysis is a development method that allows the analyst to understand the system
and its activities in a logical way.
It is a systematic approach, which uses graphical tools that analyze and refine the objectives
of an existing system and develop a new system specification which can be easily
understandable by user.
It has following attributes −
• It is graphical, i.e., it specifies the presentation of the application.
• It divides the processes so that it gives a clear picture of system flow.
• It is logical rather than physical i.e., the elements of system do not depend on vendor
or hardware.
• It is an approach that works from high-level overviews to lower-level details.

Structured Analysis Tools


During Structured Analysis, various tools and techniques are used for system development.
They are −

• Data Flow Diagrams


• Data Dictionary
• Decision Trees
• Decision Tables
• Structured English
• Pseudocode
Design Notations
Design Notations are primarily meant to be used during the process of design and are used to represent
design or design decisions. For a function-oriented design, the design can be represented graphically
or mathematically by the following:

Data Flow Diagram


Data-flow design is concerned with designing a series of functional transformations that
convert system inputs into the required outputs. The design is described as data-flow diagrams.
These diagrams show how data flows through a system and how the output is derived from the
input through a series of functional transformations.

Data-flow diagrams are a useful and intuitive way of describing a system. They are generally
understandable without specialized training, notably if control information is excluded. They show end-
to-end processing. That is the flow of processing from when data enters the system to where it leaves
the system can be traced.

Data-flow design is an integral part of several design methods, and most CASE tools support data-flow diagram creation. Different notations may use different icons to represent data-flow diagram entities, but their meanings are similar.

The notation which is used is based on the following symbols:


Data Dictionary
A data dictionary is a structured repository of data elements in the system. It stores the descriptions of all DFD data elements, that is, details and definitions of data flows, data stores, data stored in data stores, and the processes.
A data dictionary improves the communication between the analyst and the user. It plays an
important role in building a database. Most DBMSs have a data dictionary as a standard
feature. For example, refer the following table −

Sr.No. Data Name Description No. of Characters

1 ISBN ISBN Number 10

2 TITLE title 60

3 SUB Book Subjects 80

4 ANAME Author Name 15

Decision Trees
Decision trees are a method for defining complex relationships by describing decisions and
avoiding the problems in communication. A decision tree is a diagram that shows alternative
actions and conditions within horizontal tree framework. Thus, it depicts which conditions to
consider first, second, and so on.
Decision trees depict the relationship of each condition and their permissible actions. A square
node indicates an action and a circle indicates a condition. It forces analysts to consider the
sequence of decisions and identifies the actual decision that must be made.

The major limitation of a decision tree is that it lacks information in its format to describe what
other combinations of conditions you can take for testing. It is a single representation of the
relationships between conditions and actions.
For example, refer the following decision tree −

Decision Tables
Decision tables are a method of describing the complex logical relationship in a precise
manner which is easily understandable.
• It is useful in situations where the resulting actions depend on the occurrence of one or
several combinations of independent conditions.
• It is a matrix containing row or columns for defining a problem and the actions.
Components of a Decision Table

• Condition Stub − It is in the upper left quadrant which lists all the condition to be
checked.
• Action Stub − It is in the lower left quadrant which outlines all the action to be carried
out to meet such condition.
• Condition Entry − It is in upper right quadrant which provides answers to questions
asked in condition stub quadrant.
• Action Entry − It is in lower right quadrant which indicates the appropriate action
resulting from the answers to the conditions in the condition entry quadrant.
The entries in decision table are given by Decision Rules which define the relationships
between combinations of conditions and courses of action. In rules section,

• Y shows the existence of a condition.


• N represents the condition, which is not satisfied.
• A blank - against action states it is to be ignored.
• X (or a check mark will do) against action states it is to be carried out.
For example, refer the following table −

CONDITIONS Rule 1 Rule 2 Rule 3 Rule 4

Advance payment made Y N N N

Purchase amount = Rs 10,000/- - Y Y N

Regular Customer - Y N -

ACTIONS

Give 5% discount X X - -

Give no discount - - X X
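The discount rules in the table above translate naturally into code. The sketch below encodes the four rules as data and evaluates them for a hypothetical customer, which is how a decision table is typically turned into table-driven or if-then-else logic; the function and rule names are illustrative only.

# Each rule: (advance payment made, purchase condition met (Rs 10,000), regular customer) -> action.
# None stands for "condition does not matter" (the '-' entries in the table).
RULES = [
    ((True,  None,  None),  "Give 5% discount"),   # Rule 1
    ((False, True,  True),  "Give 5% discount"),   # Rule 2
    ((False, True,  False), "Give no discount"),   # Rule 3
    ((False, False, None),  "Give no discount"),   # Rule 4
]

def decide(advance, big_purchase, regular):
    facts = (advance, big_purchase, regular)
    for conditions, action in RULES:
        # A rule fires when every non-blank condition matches the facts.
        if all(c is None or c == f for c, f in zip(conditions, facts)):
            return action
    return "No rule matched"

# Hypothetical customer: no advance payment, purchase condition met, regular customer.
print(decide(advance=False, big_purchase=True, regular=True))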

Structured Charts
It partitions a system into black boxes. A black box is a system whose functionality is known to the user without knowledge of its internal design.
Structured Chart is a graphical representation which shows:

o System partitions into modules


o Hierarchy of component modules
o The relation between processing modules
o Interaction between modules
o Information passed between modules

User Interface Design


The visual part of a computer application or operating system through which a client interacts with a computer or
software. It determines how commands are given to the computer or the program and how data is displayed on
the screen.

Types of User Interface


There are two main types of User Interface:

o Text-Based User Interface or Command Line Interface


o Graphical User Interface (GUI)

Text-Based User Interface: This method relies primarily on the keyboard. A typical example of this is UNIX.

Advantages
o Offers many customization options that are easier to apply.
o Typically capable of more important tasks.

Disadvantages
o Relies heavily on recall rather than recognition.
o Navigation is often more difficult.

Graphical User Interface (GUI): GUI relies much more heavily on the mouse. A typical example of this type of
interface is any versions of the Windows operating systems.

GUI Characteristics

Characteristics   Descriptions

Windows      Multiple windows allow different information to be displayed simultaneously on the user's screen.

Icons        Icons represent different types of information. On some systems, icons represent files; on other systems, icons describe processes.

Menus        Commands are selected from a menu rather than typed in a command language.

Pointing     A pointing device such as a mouse is used for selecting choices from a menu or indicating items of interest in a window.

Graphics     Graphical elements can be mixed with text on the same display.

Advantages
o Less expert knowledge is required to use it.
o Easier to Navigate and can look through folders quickly in a guess and check manner.
o The user may switch quickly from one task to another and can interact with several different applications.

Disadvantages
o Typically decreased options.
o Usually less customizable. Not easy to use one button for tons of different variations.

MODULE4
Coding Standards and Guidelines

Different modules specified in the design document are coded in the Coding phase according
to the module specification. The main goal of the coding phase is to code from the design
document prepared after the design phase through a high-level language and then to unit test
this code.
Good software development organizations want their programmers to adhere to some well-defined and standard style of coding called coding standards. They usually make their own coding standards and guidelines depending on what suits their organization best and based on the types of software they develop. It is very important for the programmers to maintain the coding standards; otherwise the code will be rejected during code review.
Purpose of Having Coding Standards:
• A coding standard gives a uniform appearance to the codes written by different
engineers.
• It improves readability, and maintainability of the code and it reduces complexity also.
• It helps in code reuse and helps to detect error easily.
• It promotes sound programming practices and increases efficiency of the programmers.
Some of the coding standards are given below:
1. Limited use of globals:
These rules tell about which types of data that can be declared global and the data that
can’t be.

2. Standard headers for different modules:


For better understanding and maintenance of the code, the header of different modules should follow a standard format. The header format used in various companies generally contains the following:
• Name of the module
• Date of module creation
• Author of the module
• Modification history
• Synopsis of the module about what the module does
• Different functions supported in the module along with their input output
parameters
• Global variables accessed or modified by the module

3. Naming conventions for local variables, global variables, constants and


functions:
Some of the naming conventions are given below (a short illustrative sketch appears after this list):
• Meaningful and understandable variable names help anyone to understand the reason for using them.
• Local variables should be named using camel case lettering starting with small
letter (e.g. localData) whereas Global variables names should start with a capital
letter (e.g. GlobalData). Constant names should be formed using capital letters
only (e.g. CONSDATA).
• It is better to avoid the use of digits in variable names.
• The names of the function should be written in camel case starting with small
letters.
• The name of the function must describe the reason of using the function clearly
and briefly.
4. Indentation:
Proper indentation is very important to increase the readability of the code. For making
the code readable, programmers should use White spaces properly. Some of the
spacing conventions are given below:
• There must be a space after giving a comma between two function arguments.
• Each nested block should be properly indented and spaced.
• Proper Indentation should be there at the beginning and at the end of each block in
the program.
• All braces should start from a new line and the code following the end of braces
also start from a new line.

5. Error return values and exception handling conventions:


All functions that encounter an error condition should return either 0 or 1, to simplify debugging.

On the other hand, coding guidelines give some general suggestions regarding the coding style that is to be followed for better understandability and readability of the code. Some of the coding guidelines are given below:

6. Avoid using a coding style that is too difficult to understand:


Code should be easily understandable. The complex code makes maintenance and
debugging difficult and expensive.

7. Avoid using an identifier for multiple purposes:


Each variable should be given a descriptive and meaningful name indicating the reason
behind using it. This is not possible if an identifier is used for multiple purposes and thus
it can lead to confusion to the reader. Moreover, it leads to more difficulty during future
enhancements.

8. Code should be well documented:


The code should be properly commented for understanding easily. Comments
regarding the statements increase the understandability of the code.

9. Length of functions should not be very large:


Lengthy functions are very difficult to understand. That’s why functions should be small
enough to carry out small work and lengthy functions should be broken into small ones
for completing small tasks.

10. Try not to use GOTO statement:


GOTO statement makes the program unstructured, thus it reduces the understandability
of the program and also debugging becomes difficult.
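A small, purely illustrative sketch of some of these conventions (standard header comment, naming, and indentation) is given below; the module name, values, and functions are hypothetical.

# Module      : payroll_utils (hypothetical example)
# Created on  : 01-01-2024
# Author      : <author name>
# Synopsis    : Illustrates header comments, naming and indentation conventions.

MAX_BONUS = 5000                 # constant: capital letters only

GlobalCounter = 0                # global variable: starts with a capital letter

def computeBonus(basicPay, rating):          # function and locals in camel case
    bonusAmount = basicPay * 0.1 * rating    # meaningful local variable names
    if bonusAmount > MAX_BONUS:
        bonusAmount = MAX_BONUS              # nested block properly indented
    return bonusAmount

print(computeBonus(20000, 1.2))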

Advantages of Coding Guidelines:


• Coding guidelines increase the efficiency of the software and reduces the development
time.
• Coding guidelines help in detecting errors in the early phases, so it helps to reduce the
extra cost incurred by the software project.
• If coding guidelines are maintained properly, then the software code improves readability and understandability, thus reducing the complexity of the code.
• It reduces the hidden cost for developing the software.

Software Documentation

Any written text, illustrations or video that describe a software or program to its users is
called program or software document. User can be anyone from a programmer, system
analyst and administrator to end user. At various stages of development multiple documents
may be created for different users. In fact, software documentation is a critical process in
the overall software development process.
In modular programming documentation becomes even more important because different
modules of the software are developed by different teams. If anyone other than the
development team wants to or needs to understand a module, good and detailed
documentation will make the task easier.
These are some guidelines for creating the documents −
• Documentation should be from the point of view of the reader
• Document should be unambiguous
• There should be no repetition
• Industry standards should be used
• Documents should always be updated
• Any outdated document should be phased out after due recording of the phase out
Advantages of Documentation
These are some of the advantages of providing program documentation −
• Keeps track of all parts of a software or program
• Maintenance is easier
• Programmers other than the developer can understand all aspects of software
• Improves overall quality of the software
• Assists in user training
• Ensures knowledge de-centralization, cutting costs and effort if people leave the
system abruptly
Example Documents
A software can have many types of documents associated with it. Some of the important ones
include −
• User manual − It describes instructions and procedures for end users to use the
different features of the software.
• Operational manual − It lists and describes all the operations being carried out and
their inter-dependencies.
• Design Document − It gives an overview of the software and describes design
elements in detail. It documents details like data flow diagrams, entity relationship
diagrams, etc.
• Requirements Document − It has a list of all the requirements of the system as well as an analysis of the viability of the requirements. It can have use cases, real-life scenarios, etc.
• Technical Documentation − It is a documentation of actual programming components
like algorithms, flowcharts, program codes, functional modules, etc.
• Testing Document − It records test plan, test cases, validation plan, verification plan,
test results, etc. Testing is one phase of software development that needs intensive
documentation.
• List of Known Bugs − Every software has bugs or errors that cannot be removed
because either they were discovered very late or are harmless or will take more effort
and time than necessary to rectify. These bugs are listed with program documentation
so that they may be removed at a later date. Also they help the users, implementers
and maintenance people if the bug is activated.

Software Testing

Software testing can be stated as the process of verifying and validating that a software or application is bug free, meets the technical requirements as guided by its design and development, and meets the user requirements effectively and efficiently, handling all the exceptional and boundary cases.
The process of software testing aims not only at finding faults in the existing software but
also at finding measures to improve the software in terms of efficiency, accuracy and
usability. It mainly aims at measuring specification, functionality and performance of a
software program or application.
Software testing can be divided into two steps:
1. Verification: it refers to the set of tasks that ensure that software correctly implements a
specific function.
2. Validation: it refers to a different set of tasks that ensure that the software that has been
built is traceable to customer requirements.
Verification: “Are we building the product right?”
Validation: “Are we building the right product?”

What are different types of software testing?


Software Testing can be broadly classified into two types:
1. Manual Testing: Manual testing means testing software manually, i.e., without using any automated tool or script. In this type, the tester takes on the role of an end user and tests the software to identify any unexpected behaviour or bug. There are different stages of manual testing such as unit testing, integration testing, system testing, and user acceptance testing.
Testers use test plans, test cases, or test scenarios to test the software and ensure the completeness of testing. Manual testing also includes exploratory testing, where testers explore the software to identify errors in it.
2. Automation Testing: Automation testing, also known as Test Automation, is when the tester writes scripts and uses other software to test the product. This involves automating a manual process. Automation testing is used to re-run, quickly and repeatedly, the test scenarios that were previously performed manually.
Apart from regression testing, automation testing is also used to test the application from a load, performance, and stress point of view. It increases test coverage, improves accuracy, and saves time and money in comparison to manual testing.
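As a rough illustration (not from the syllabus text), the sketch below shows the idea behind test automation: a script replays a table of previously manual test scenarios against a hypothetical login() function, so the same checks can be re-run after every change. The function name, its stand-in body and the scenario data are all invented for the example.

# Hypothetical system under test: a login() function (assumed for illustration).
def login(username, password):
    # Stand-in implementation so the script is runnable.
    return username == "admin" and password == "secret"

# Test scenarios that were originally executed manually.
scenarios = [
    ("admin", "secret", True),    # valid credentials
    ("admin", "wrong", False),    # wrong password
    ("", "", False),              # empty input (boundary case)
]

def run_suite():
    failures = 0
    for user, pwd, expected in scenarios:
        actual = login(user, pwd)
        if actual != expected:
            failures += 1
            print(f"FAIL: login({user!r}, {pwd!r}) -> {actual}, expected {expected}")
    print(f"{len(scenarios) - failures}/{len(scenarios)} scenarios passed")

if __name__ == "__main__":
    run_suite()    # can be re-run automatically after every build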
What are different techniques of Software Testing?
Software testing techniques can be broadly classified into two categories:
1. Black Box Testing: The technique of testing in which the tester does not have access to the source code of the software. Testing is conducted at the software interface, without concern for the internal logical structure of the software.
2. White-Box Testing: The technique of testing in which the tester is aware of the internal workings of the product and has access to its source code. Testing is conducted by making sure that all internal operations are performed according to the specifications.
BLACK BOX TESTING | WHITE BOX TESTING
Internal workings of the application are not required. | Knowledge of the internal workings is a must.
Also known as closed box / data-driven testing. | Also known as clear box / structural testing.
Done by end users, testers and developers. | Normally done by testers and developers.
Testing can only be done by trial and error. | Data domains and internal boundaries can be better tested.

What are different levels of software testing?


Software testing can be broadly classified into four levels:
1. Unit Testing: A level of the software testing process where individual units/components of a software/system are tested. The purpose is to validate that each unit of the software performs as designed (a small example follows this list).
2. Integration Testing: A level of the software testing process where individual units are
combined and tested as a group. The purpose of this level of testing is to expose faults in the
interaction between integrated units.
3. System Testing: A level of the software testing process where a complete, integrated
system/software is tested. The purpose of this test is to evaluate the system’s compliance
with the specified requirements.
4. Acceptance Testing: A level of the software testing process where a system is tested for
acceptability. The purpose of this test is to evaluate the system’s compliance with the
business requirements and assess whether it is acceptable for delivery.
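As a minimal illustration of unit testing (level 1 above), the following sketch uses Python's built-in unittest module to test a small add() function in isolation. The function and the test values are invented for the example; they are not part of the syllabus material.

import unittest

def add(a, b):
    # Unit under test (a trivial example function).
    return a + b

class TestAdd(unittest.TestCase):
    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative_numbers(self):
        self.assertEqual(add(-2, -3), -5)

if __name__ == "__main__":
    unittest.main()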
Differences between Black Box Testing vs White Box Testing

Software testing can be broadly classified into two categories:

1. Black Box Testing is a software testing method in which the internal structure/design/implementation of the item being tested is not known to the tester.

2. White Box Testing is a software testing method in which the internal structure/design/implementation of the item being tested is known to the tester.

Differences between Black Box Testing vs White Box Testing:

BLACK BOX TESTING | WHITE BOX TESTING
It is a way of software testing in which the internal structure, code or program of the item being tested is hidden and nothing is known about it. | It is a way of testing the software in which the tester has knowledge about the internal structure, code or program of the software.
It is mostly done by software testers. | It is mostly done by software developers.
No knowledge of implementation is needed. | Knowledge of implementation is required.
It can be referred to as outer or external software testing. | It is the inner or internal software testing.
It is a functional test of the software. | It is a structural test of the software.
This testing can be initiated on the basis of the requirement specification document. | This type of testing is started after the detailed design document is available.
No knowledge of programming is required. | Knowledge of programming is mandatory.
It is behaviour testing of the software. | It is logic testing of the software.
It is applicable to the higher levels of software testing. | It is generally applicable to the lower levels of software testing.
It is also called closed testing. | It is also called clear box testing.
It is the least time consuming. | It is the most time consuming.
It is not suitable or preferred for algorithm testing. | It is suitable for algorithm testing.
Can be done by trial and error methods. | Data domains along with internal boundaries can be better tested.
Example: searching something on Google using keywords. | Example: supplying inputs to check and verify loops.

Types of Black Box Testing:

• A. Functional Testing

• B. Non-functional Testing

• C. Regression Testing

Types of White Box Testing:

• A. Path Testing

• B. Loop Testing

• C. Condition Testing

Black box testing


Black box testing is a type of software testing in which the internal structure and implementation of the software are not known to the tester; only the externally visible functionality is exercised. The testing is done without internal knowledge of the product.
Black box testing can be done in following ways:
1. Syntax Driven Testing – This type of testing is applied to systems that can be syntactically represented by some language, for example compilers, or languages that can be represented by a context-free grammar. Here, the test cases are generated so that each grammar rule is used at least once.
2. Equivalence partitioning – It is often seen that many types of inputs work similarly, so instead of testing all of them separately we can group them together and test only one input from each group. The idea is to partition the input domain of the system into a number of equivalence classes such that each member of a class works in a similar way, i.e., if a test case from one class results in some error, the other members of that class would result in the same error.
The technique involves two steps:
1. Identification of equivalence classes – Partition the input domain into at least two sets: valid values and invalid values. For example, if the valid range is 0 to 100, then select one valid input such as 49 and one invalid input such as 104.
2. Generating test cases –
(i) Assign a unique identification number to each valid and invalid class of input.
(ii) Write test cases covering all valid and invalid classes, ensuring that no two invalid inputs mask each other.

To calculate the square root of a number, the equivalence classes will be:
(a) Valid inputs:
• Whole number which is a perfect square- output will be an integer.
• Whole number which is not a perfect square- output will be decimal number.
• Positive decimals
(b) Invalid inputs:
• Negative numbers (integer or decimal).
• Characters other than numbers, like “a”, “!”, “;”, etc.
3. Boundary value analysis – Boundaries are very good places for errors to occur. Hence, if test cases are designed for the boundary values of the input domain, the efficiency of testing improves and the probability of finding errors increases. For example, if the valid range is 10 to 100, then in addition to typical valid and invalid inputs, also test the boundary values 10 and 100. A small code sketch combining equivalence classes and boundary values follows.
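A minimal sketch, assuming a hypothetical square_root() routine and the equivalence classes listed above: one representative test case is drawn from each class, and boundary values are then listed for a separate 10-to-100 range check. The function body, names and expected values are assumptions made for illustration only.

import math

def square_root(x):
    # Stand-in implementation of the routine under test.
    if not isinstance(x, (int, float)) or x < 0:
        raise ValueError("invalid input")
    return math.sqrt(x)

# One representative test case per valid equivalence class.
equivalence_cases = [
    (25, 5.0),       # whole number, perfect square -> integer-valued result
    (2, 1.4142),     # whole number, not a perfect square -> decimal result
    (6.25, 2.5),     # positive decimal
]
invalid_cases = [-4, "a"]   # invalid classes: negative number, non-numeric character

for value, expected in equivalence_cases:
    assert abs(square_root(value) - expected) < 1e-3

for value in invalid_cases:
    try:
        square_root(value)
        assert False, f"{value!r} should have been rejected"
    except ValueError:
        pass   # rejection is the expected behaviour for the invalid classes

# Boundary value analysis for an input whose valid range is 10 to 100:
boundary_inputs = [9, 10, 11, 99, 100, 101]   # values on and just around the boundaries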

White box Testing


White box testing techniques analyze the internal structure of the software: the data structures used, the internal design, the code structure, and the working of the software, rather than just its functionality as in black box testing. It is also called glass box testing, clear box testing or structural testing.
Working process of white box testing:
• Input: Requirements, Functional specifications, design documents, source code.
• Processing: Performing risk analysis for guiding through the entire process.
• Proper test planning: Designing test cases so as to cover the entire code; executing them repeatedly (rinse and repeat) until error-free software is reached, and communicating the results.
• Output: Preparing final report of the entire testing process.
Testing techniques:
• Statement coverage: In this technique, the aim is to traverse every statement at least once, so each line of code is tested. In a flowchart, every node must be traversed at least once. Since all lines of code are covered, this helps in pointing out faulty code.

(Figure: statement coverage example, not reproduced here.)

• Branch Coverage: In this technique, test cases are designed so that each branch from every decision point is traversed at least once. In a flowchart, all edges must be traversed at least once.
(Figure: an example flowchart in which 4 test cases are required so that all branches of all decisions, i.e., all edges of the flowchart, are covered.)
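Since the original figures for statement and branch coverage are not reproduced here, the following sketch (invented for illustration) shows the difference on a small function: a single test case already executes every statement, but a second case is needed so that both the true and the false branches of the decision are taken.

def classify(x):
    result = "non-negative"
    if x < 0:                 # decision point with two branches
        result = "negative"
    return result

# Statement coverage: the single input -1 executes every statement,
# because the body of the if is entered.
assert classify(-1) == "negative"

# Branch coverage: both outcomes of the decision must be exercised,
# so a second test case taking the false branch is also required.
assert classify(5) == "non-negative"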

• Condition Coverage: In this technique, every individual condition must be covered, as shown in the following example:
1. READ X, Y
2. IF (X == 0 || Y == 0)
3. PRINT ‘0’
In this example there are two conditions: X == 0 and Y == 0. Test cases are designed so that each condition takes both TRUE and FALSE values. One possible set would be:
• #TC1 – X = 0, Y = 55
• #TC2 – X = 5, Y = 0
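A runnable version of the fragment above (a sketch; the function name is chosen only for illustration) with the two test cases TC1 and TC2, which make each individual condition take the value TRUE once and FALSE once:

def print_if_zero(x, y):
    # Mirrors: IF (X == 0 || Y == 0) PRINT '0'
    if x == 0 or y == 0:
        return "0"
    return ""

# TC1: X = 0, Y = 55  -> first condition TRUE, second condition FALSE
assert print_if_zero(0, 55) == "0"

# TC2: X = 5, Y = 0   -> first condition FALSE, second condition TRUE
assert print_if_zero(5, 0) == "0"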
• Basis Path Testing: In this technique, control flow graphs are made from code or
flowchart and then Cyclomatic complexity is calculated which defines the number of
independent paths so that the minimal number of test cases can be designed for each
independent path.
Steps:
1. Make the corresponding control flow graph
2. Calculate the cyclomatic complexity
3. Find the independent paths
4. Design test cases corresponding to each independent path
Flow graph notation: It is a directed graph consisting of nodes and edges. Each node
represents a sequence of statements, or a decision point. A predicate node is the one
that represents a decision point that contains a condition after which the graph splits.
Regions are bounded by nodes and edges.

Cyclomatic Complexity: It is a measure of the logical complexity of the software and is


used to define the number of independent paths. For a graph G, V(G) is its cyclomatic
complexity.
Calculating V(G):
1. V(G) = P + 1, where P is the number of predicate nodes in the flow graph
2. V(G) = E – N + 2, where E is the number of edges and N is the total number of nodes
3. V(G) = the number of non-overlapping (bounded) regions in the graph
Example (for the flow graph of the figure, not reproduced here):
V(G) = 4 (using any of the above formulae)


No of independent paths = 4
• #P1: 1 – 2 – 4 – 7 – 8
• #P2: 1 – 2 – 3 – 5 – 7 – 8
• #P3: 1 – 2 – 3 – 6 – 7 – 8
• #P4: 1 – 2 – 4 – 7 – 1 – . . . – 7 – 8
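A small sketch (not from the notes) that checks the formulae against each other. The edge, node and predicate counts are assumed from the example paths above: nodes 1 to 8, edges 1-2, 2-3, 2-4, 3-5, 3-6, 4-7, 5-7, 6-7, 7-1 and 7-8, and predicate nodes 2, 3 and 7; with these assumptions every formula gives V(G) = 4.

def cyclomatic_complexity(edges, nodes, predicate_nodes):
    # Two of the equivalent formulae; they must agree for a well-formed flow graph.
    v_from_edges = edges - nodes + 2          # V(G) = E - N + 2
    v_from_predicates = predicate_nodes + 1   # V(G) = P + 1
    assert v_from_edges == v_from_predicates
    return v_from_edges

# Values assumed from the example paths above (the original figure is not reproduced).
print(cyclomatic_complexity(edges=10, nodes=8, predicate_nodes=3))   # -> 4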
Loop Testing: Loops are widely used and these are fundamental to many algorithms
hence, their testing is very important. Errors often occur at the beginnings and ends of
loops.
1. Simple loops: For a simple loop of size n, test cases are designed that:
• Skip the loop entirely
• Make only one pass through the loop
• Make 2 passes
• Make m passes, where m < n
• Make n-1, n and n+1 passes
2. Nested loops: For nested loops, all the loops are set to their minimum count and we start from the innermost loop. Simple loop tests are conducted for the innermost loop, and this is worked outwards until all the loops have been tested.
3. Concatenated loops: Independent loops, one after another. Simple loop tests are applied to each.
If they are not independent, treat them like nested loops.
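A minimal sketch of simple-loop testing, assuming a made-up function that sums the first n elements of a list: the chosen values of n request zero passes, one pass, two passes, m (< n) passes, and n-1, n and n+1 passes (here the loop size n is the list length, 5; the n+1 case checks behaviour when one more pass than the data allows is requested).

def sum_first(values, n):
    # Loop under test: iterates over at most n elements of the list.
    total = 0
    for i in range(min(n, len(values))):
        total += values[i]
    return total

data = [1, 2, 3, 4, 5]                  # loop size n = 5
for passes in [0, 1, 2, 3, 4, 5, 6]:    # skip, 1, 2, m < n, n-1, n, n+1
    print(passes, sum_first(data, passes))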
Advantages:
1. White box testing is very thorough as the entire code and structures are tested.
2. It results in optimization of the code, removing errors and helping to eliminate extra lines of code.
3. It can start at an earlier stage as it doesn’t require any interface as in case of black box
testing.
4. Easy to automate.
Disadvantages:
1. Main disadvantage is that it is very expensive.
2. Redesign of code and rewriting code needs test cases to be written again.
3. Testers are required to have in-depth knowledge of the code and programming
language as opposed to black box testing.
4. Missing functionalities cannot be detected as the code that exists is tested.
5. Very complex and at times not realistic.

Debugging
Introduction:
In the context of software engineering, debugging is the process of fixing a bug in the
software. In other words, it refers to identifying, analyzing and removing errors. This activity
begins after the software fails to execute properly and concludes by solving the problem and
successfully testing the software. It is considered to be an extremely complex and tedious
task because errors need to be resolved at all stages of debugging.
Difference Between Debugging and Testing:
Debugging is different from testing. Testing focuses on finding bugs, errors, etc., whereas debugging starts after a bug has been identified in the software. Testing is used to ensure that the program does what it is supposed to do, with a certain minimum success rate. Testing can be manual or automated, and there are several different types of testing such as unit testing, integration testing, alpha and beta testing, etc.
Debugging requires a lot of knowledge, skills, and expertise. It can be supported by some
automated tools available but is more of a manual process as every bug is different and
requires a different technique, unlike a pre-defined testing mechanism.

Integration Testing
Integration testing is the process of testing the interface between two software units or modules. Its focus is on determining the correctness of the interface. The purpose of integration testing is to expose faults in the interaction between integrated units. Once all the modules have been unit tested, integration testing is performed.
Integration test approaches –
There are four types of integration testing approaches. Those approaches are the following:
1. Big-Bang Integration Testing –
It is the simplest integration testing approach, where all the modules are combined and the functionality is verified after the completion of individual module testing. In simple words, all the modules of the system are simply put together and tested. This approach is practicable only for very small systems. Once an error is found during integration testing, it is very difficult to localize, as it may potentially belong to any of the modules being integrated. So, debugging errors reported during big-bang integration testing is very expensive.
Advantages:
• It is convenient for small systems.
Disadvantages:
• There will be quite a lot of delay because you would have to wait for all the modules to
be integrated.
• High risk critical modules are not isolated and tested on priority since all modules are
tested at once.
2. Bottom-Up Integration Testing –
In bottom-up testing, the modules at the lowest levels are tested first and then combined with and tested against higher-level modules, until all modules are tested. The primary purpose is to test, for each subsystem, the interfaces among the various modules making up the subsystem. This approach uses test drivers to drive and pass appropriate data to the lower-level modules.

Advantages:
• In bottom-up testing, no stubs are required.
• A principal advantage of this integration testing is that several disjoint subsystems can be tested simultaneously.
Disadvantages:
• Driver modules must be produced.
• Testing becomes complex when the system is made up of a large number of small subsystems.
3. Top-Down Integration Testing –
In top-down integration testing, stubs are used to simulate the behaviour of the lower-level modules that are not yet integrated (a small sketch of a stub and a driver appears after this list). Testing takes place from top to bottom: first the high-level modules are tested, then the low-level modules, and finally the low-level modules are integrated with the high-level ones to ensure the system works as intended.
Advantages:
• Separately debugged module.
• Few or no drivers needed.
• It is more stable and accurate at the aggregate level.
Disadvantages:
• Needs many Stubs.
• Modules at lower level are tested inadequately.
4. Mixed Integration Testing –
Mixed integration testing, also called sandwiched integration testing, follows a combination of the top-down and bottom-up approaches. In the top-down approach, testing can start only after the top-level modules have been coded and unit tested; in the bottom-up approach, testing can start only after the bottom-level modules are ready. The sandwich (mixed) approach overcomes these shortcomings of the top-down and bottom-up approaches.
Advantages:
• The mixed approach is useful for very large projects having several sub-projects.
• The sandwich approach overcomes the shortcomings of both the top-down and bottom-up approaches.
Disadvantages:
• Mixed integration testing is costly, because one part follows the top-down approach while another part follows the bottom-up approach.
• This approach is not suitable for smaller systems with heavy interdependence between the different modules.
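A rough sketch (all names invented) of the test scaffolding mentioned above: in bottom-up integration a driver feeds test data to a finished lower-level module, while in top-down integration a stub stands in for a lower-level module that is not yet ready.

# Lower-level module, already implemented (exercised in bottom-up integration).
def compute_tax(amount):
    return amount * 0.18

# Driver: temporary code that feeds test data to the lower-level module.
def tax_driver():
    for amount in [0, 100, 2500]:
        print("tax on", amount, "=", compute_tax(amount))

# Stub: temporary stand-in for a lower-level module that is not yet coded,
# so that a higher-level module can be tested in top-down integration.
def fetch_price_stub(item_id):
    return 99.0            # canned value instead of a real database lookup

# Higher-level module under test, wired to the stub for now.
def billing_total(item_id, quantity, fetch_price=fetch_price_stub):
    price = fetch_price(item_id)
    return quantity * (price + compute_tax(price))

tax_driver()
print(billing_total("A1", 2))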

System Testing

System Testing is a type of software testing that is performed on a complete integrated


system to evaluate the compliance of the system with the corresponding requirements.
System Testing is carried out on the whole system in the context of either system
requirement specifications or functional requirement specifications or in the context of both.
System testing tests the design and behavior of the system and also the expectations of the
customer. It is performed to test the system beyond the bounds mentioned in the software
requirements specification (SRS).
System Testing is basically performed by a testing team that is independent of the development team, which helps to assess the quality of the system impartially. It includes both functional and non-functional testing.
System Testing is a black-box type of testing.
System Testing Process:
System Testing is performed in the following steps:
• Test Environment Setup:
Create testing environment for the better quality testing.
• Create Test Case:
Generate test case for the testing process.
• Create Test Data:
Generate the data that is to be tested.
• Execute Test Case:
After the generation of the test case and the test data, test cases are executed.
• Defect Reporting:
Defects detected in the system are reported.
• Regression Testing:
It is carried out to check that the changes made have no side effects on the rest of the system.
• Log Defects:
Defects are logged and fixed in this step.
• Retest:
If a test is not successful, the test is performed again.
Types of System Testing:
• Performance Testing:
Performance Testing is a type of software testing carried out to test the speed, scalability, stability and reliability of the software product or application (a rough timing sketch appears after this list).
• Load Testing:
Load Testing is a type of software Testing which is carried out to determine the
behavior of a system or software product under extreme load.
• Stress Testing:
Stress Testing is a type of software testing performed to check the robustness of the system under varying loads.
• Scalability Testing:
Scalability Testing is a type of software testing carried out to check the performance of a software application or system in terms of its capability to scale up or scale down with the number of user requests.
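As a very rough illustration of the idea behind performance and load testing (not a substitute for a real load-testing tool), the sketch below times a hypothetical handle_request() function under an increasing number of simulated requests; the function and the load levels are invented for the example.

import time

def handle_request(payload):
    # Stand-in for the operation whose performance is being measured.
    return sum(range(1000))

for load in [10, 100, 1000]:        # increasing load levels
    start = time.perf_counter()
    for _ in range(load):
        handle_request("dummy")
    elapsed = time.perf_counter() - start
    print(f"{load} requests handled in {elapsed:.4f} s")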

Software Reliability means operational reliability. It is described as the ability of a system or component to perform its required functions under stated conditions for a specified period of time.

Software reliability is also defined as the probability that a software system fulfills its
assigned task in a given environment for a predefined number of input cases, assuming that
the hardware and the input are free of error.

Software Reliability is an essential aspect of software quality, together with functionality, usability, performance, serviceability, capability, installability, maintainability, and documentation. Software reliability is hard to achieve because the complexity of software tends to be high. While any system with a high degree of complexity, including software, is hard to bring to a given level of reliability, system developers tend to push complexity into the software layer, owing to the rapid growth of system size and the ease of doing so by upgrading the software.

The quality of a software product is defined in terms of its fitness of purpose. That is, a quality product does precisely what the users want it to do. For software products, fitness of purpose is generally interpreted in terms of satisfaction of the requirements laid down in the SRS document.

The modern view associates several quality attributes with a software product, such as the following:

Portability: A software product is said to be portable if it can freely be made to work in various operating system environments, on multiple machines, with other software products, etc.

Usability: A software product has better usability if various categories of users can easily
invoke the functions of the product.

Reusability: A software product has excellent reusability if different modules of the product
can quickly be reused to develop new products.

Correctness: A software product is correct if various requirements as specified in the SRS


document have been correctly implemented.

Maintainability: A software product is maintainable if bugs can be easily corrected as and


when they show up, new tasks can be easily added to the product, and the functionalities of
the product can be easily modified, etc.

Software Maintenance

Software Maintenance is the process of modifying a software product after it has been
delivered to the customer. The main purpose of software maintenance is to modify and
update software application after delivery to correct faults and to improve performance.
Need for Maintenance –
Software Maintenance must be performed in order to:
• Correct faults.
• Improve the design.
• Implement enhancements.
• Interface with other systems.
• Accommodate programs so that different hardware, software, system features, and
telecommunications facilities can be used.
• Migrate legacy software.
• Retire software.
Categories of Software Maintenance –
Maintenance can be divided into the following:
1. Corrective maintenance:
Corrective maintenance of a software product may be essential either to rectify some
bugs observed while the system is in use, or to enhance the performance of the system.
2. Adaptive maintenance:
This includes modifications and updations when the customers need the product to run
on new platforms, on new operating systems, or when they need the product to
interface with new hardware and software.
3. Perfective maintenance:
A software product needs maintenance to support the new features that the users want
or to change different types of functionalities of the system according to the customer
demands.
4. Preventive maintenance:
This type of maintenance includes modifications and updations to prevent future problems in the software. It aims to address problems which are not significant at the moment but may cause serious issues in the future.
Reverse Engineering –
Reverse Engineering is the process of extracting knowledge or design information from anything man-made and reproducing it based on the extracted information. It is also called back engineering.
Software Reverse Engineering –
Software Reverse Engineering is the process of recovering the design and the requirements specification of a product from an analysis of its code. Reverse engineering is becoming important, since several existing software products lack proper documentation, are highly unstructured, or have had their structure degraded through a series of maintenance efforts.
