
Notes of Unit - V (SE, KCA-302) MCA III Sem-1


Software Maintenance:- Software maintenance is the process of modifying a
software product after it has been delivered to the customer. Its main
purpose is to modify and update the software after delivery to correct
faults and to improve performance. Software maintenance is also an
important part of the Software Development Life Cycle (SDLC). The main
focus of maintenance is to update the application and make whatever
modifications are needed to improve its performance. Software models some
part of the real world, so whenever the real world changes, corresponding
changes are required in the software.

Need for Maintenance –

Software Maintenance must be performed in order to:

● Correct faults.
● Improve the design.
● Implement enhancements.
● Interface with other systems.
● Accommodate programs so that different hardware, software, system
features, and telecommunications facilities can be used.
● Migrate legacy software.
● Retire software.
● Accommodate changing user requirements.
● Improve performance (make the code run faster).

Challenges in Software Maintenance:

The various challenges in software maintenance are given below:

● The typical lifetime of a software product is considered to be ten to
fifteen years. Since maintenance is open-ended and may continue for
decades, it becomes very expensive.
● Older software, which was meant to work on slow machines with less
memory and storage capacity, cannot hold its own against newly
developed, enhanced software running on modern hardware.
● Changes are often left undocumented, which may cause more conflicts
in the future.
● As technology advances, it becomes costly to maintain old software.
● Changes made can easily damage the original structure of the
software, making subsequent changes difficult.
● There is a lack of code comments.
Categories of Software Maintenance –

Maintenance can be divided into the following:

1. Corrective maintenance:
Corrective maintenance of a software product may be essential either
to rectify some bugs observed while the system is in use, or to enhance
the performance of the system.

2. Adaptive maintenance:
This includes modifications and updates made when customers need the
product to run on new platforms or new operating systems, or when they
need the product to interface with new hardware or software.

3. Perfective maintenance:
A software product needs maintenance to support the new features that
the users want or to change different types of functionalities of the
system according to the customer demands.
4. Preventive maintenance:
This type of maintenance includes modifications and updates made to
prevent future problems in the software. It aims to address problems
that are not significant at this moment but may cause serious issues
in the future.

Reverse Engineering –

Reverse Engineering is the process of extracting knowledge or design
information from anything man-made and reproducing it based on the
extracted information. It is also called back engineering. The main
objective of reverse engineering is to find out how the system works.
There are many reasons to perform reverse engineering: to understand how
a thing works, or to recreate the object with some enhancements.

Software Reverse Engineering –

Software Reverse Engineering is the process of recovering the design and
the requirements specification of a product from an analysis of its code.
Reverse engineering is becoming important, since several existing
software products lack proper documentation, are highly unstructured, or
have a structure that has degraded through a series of maintenance
efforts.

Why Reverse Engineering?

● Providing proper system documentation.
● Recovery of lost information.
● Assisting with maintenance.
● Facilitating software reuse.
● Discovering unexpected flaws or faults.
● Implementing innovative processes for specific uses.
● Making it easy to document how efficiency and performance can be
improved.

Uses of Software Reverse Engineering –

● In software design, reverse engineering enables the developer or
programmer to add new features to existing software with or without
knowing the source code.
● In software testing, reverse engineering helps testers study or
detect viruses and other malware code.
Software Maintenance Cost Factors
There are two types of cost factors involved in software maintenance.

These are

● Non-Technical Factors

● Technical Factors

Non-Technical Factors

1. Application Domain
● If the application of the program is defined and well understood, the system
requirements may be definitive and maintenance due to changing needs
minimized.

● If the application is entirely new, it is likely that the initial
requirements will be modified frequently as users gain experience with
the system.

2. Staff Stability

● It is easier for the original writer of a program to understand and
change it than for some other person who must understand the program by
studying its reports and code listings.

● If the implementers of a system also maintain that system, maintenance
costs will be reduced.

● In practice, the nature of the programming profession is such that
people change jobs regularly. It is unusual for one person to develop
and maintain an application throughout its useful life.

3. Program Lifetime

● Programs become obsolete when their function becomes obsolete, or when
their original hardware is replaced and conversion costs exceed
rewriting costs.

4. Dependence on External Environment

● If an application is dependent on its external environment, it must be
modified as that environment changes.

● For example:

● Changes in a taxation system might require payroll, accounting, and
stock control programs to be modified.
● Taxation changes are fairly frequent, and maintenance costs for these
programs are driven by the frequency of these changes.

● A program used in mathematical applications does not typically depend
on humans changing the assumptions on which the program is based.

5. Hardware Stability

● If an application is designed to operate on a specific hardware
configuration and that configuration does not change during the
program's lifetime, no maintenance costs due to hardware changes will be
incurred.

● Hardware developments are so rapid that this situation is rare.

● The application must be changed to use new hardware that replaces obsolete
equipment.

Technical Factors
Technical Factors include the following:
Module Independence

It should be possible to change one program unit of a system without affecting any other
unit.

Programming Language

Programs written in a high-level programming language are generally
easier to understand than programs written in a low-level language.

Programming Style

The method in which a program is written contributes to its understandability and hence,
the ease with which it can be modified.

Program Validation and Testing


● Generally, the more time and effort spent on design validation and
program testing, the fewer bugs in the program and, consequently, the
lower the maintenance costs resulting from bug correction.

● Maintenance costs due to bug correction are governed by the type of
fault to be repaired.

● Coding errors are generally relatively cheap to correct; design errors
are more expensive, as they may involve the rewriting of one or more
program units.

● Bugs in the software requirements are usually the most expensive to
correct because of the drastic redesign that is generally involved.

Documentation

● If a program is supported by clear, complete yet concise
documentation, the task of understanding the application can be
relatively straightforward.

● Program maintenance costs tend to be lower for well-documented systems
than for systems supplied with inadequate or incomplete documentation.

Configuration Management Techniques

● One of the essential costs of maintenance is keeping track of all
system documents and ensuring that they are kept consistent.

● Effective configuration management can help control these costs.

Software Re-engineering is a process of software development which is
done to improve the maintainability of a software system. Re-engineering
is the examination and alteration of a system to reconstitute it in a
new form. This process encompasses a combination of sub-processes such
as reverse engineering, forward engineering, and reconstruction.
Re-engineering is the reorganization and modification of existing
software systems to make them more maintainable.

Objectives of Re-engineering:

● To describe a cost-effective option for system evolution.
● To describe the activities involved in the software maintenance
process.
● To distinguish between software and data re-engineering and to
explain the problems of data re-engineering.

Steps involved in Re-engineering:

1. Inventory Analysis
2. Document Reconstruction
3. Reverse Engineering
4. Code Reconstruction
5. Data Reconstruction
6. Forward Engineering

Re-engineering Cost Factors:

● The quality of the software to be re-engineered
● The tool support available for re-engineering
● The extent of the required data conversion
● The availability of expert staff for re-engineering

Advantages of Re-engineering:
● Reduced Risk: as the software already exists, the risk is lower
than in new software development. Development problems, staffing
problems, and specification problems are among the many problems
that may arise in new software development.
● Reduced Cost: The cost of re-engineering is less than the costs of
developing new software.
● Revelation of Business Rules: As a system is re-engineered ,
business rules that are embedded in the system are rediscovered.
● Better use of Existing Staff: Existing staff expertise can be
maintained and extended to accommodate new skills during re-
engineering.

Disadvantages of Re-engineering:

● There are practical limits to the extent of re-engineering.

● Major architectural changes or radical reorganization of the
system's data management must be done manually.
● A re-engineered system is not likely to be as maintainable as a
new system developed using modern software engineering methods.

Software Configuration Management


When we develop software, the product undergoes many changes during its
maintenance phase; we need to handle these changes effectively.
Several individuals (programmers) work together to achieve these common
goals. These individuals produce several work products (SC Items), e.g.,
intermediate versions of modules, test data used during debugging, and
parts of the final product.

The elements that comprise all information produced as part of the
software process are collectively called a software configuration.

As software development progresses, the number of Software Configuration
Items (SCIs) grows rapidly. These items are handled and controlled by
SCM; this is where we require software configuration management.

A configuration of the product refers not only to the product's
constituent components but also to particular versions of those
components.

Therefore, SCM is the discipline which:

● Identifies changes.

● Monitors and controls changes.

● Ensures the proper implementation of changes made to an item.

● Audits and reports on the changes made.

Configuration Management (CM) is a technique for identifying,
organizing, and controlling modifications to software being built by a
programming team.

The objective is to maximize productivity by minimizing mistakes
(errors).

CM is essential for the inventory management, library management, and
update management of the items needed for the project.
Why do we need Configuration Management?
Multiple people work on software that is constantly being updated. A
software project may involve multiple versions, branches, and authors,
and the team may be geographically distributed and working
concurrently. Changes in user requirements, policy, budget, and
schedules all need to be accommodated.

Importance of SCM
It is practical in controlling and managing access to various SCIs,
e.g., by preventing two members of a team from checking out the same
component for modification at the same time.

It provides tools to ensure that changes are being properly implemented.

It has the capability of describing and storing the various constituents
of the software.

SCM keeps a system in a consistent state by automatically producing
derived versions upon modification of a component.
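The exclusive check-out control mentioned above can be sketched in a few lines of Python; the class and item names here are purely illustrative, not part of any real SCM tool:

```python
class ConfigurationStore:
    """Minimal sketch of SCM check-out control: at most one
    team member may hold a given SCI for modification."""

    def __init__(self):
        self.checked_out = {}  # item name -> user currently holding it

    def check_out(self, item, user):
        holder = self.checked_out.get(item)
        if holder is not None:
            raise RuntimeError(f"{item} is already checked out by {holder}")
        self.checked_out[item] = user

    def check_in(self, item, user):
        if self.checked_out.get(item) != user:
            raise RuntimeError(f"{user} does not hold {item}")
        del self.checked_out[item]

store = ConfigurationStore()
store.check_out("payroll_module", "alice")
try:
    store.check_out("payroll_module", "bob")  # second check-out is refused
except RuntimeError as e:
    print(e)
store.check_in("payroll_module", "alice")
```

Real SCM tools implement this same policy (pessimistic locking) at the repository level.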

Change Control Process

What is Change Control?


Change Control is the process that a company uses to document,
identify and authorize changes to an IT environment. It reduces the
chances of unauthorized alterations, disruption and errors in the
system.
Why Change Control?
Whenever any new or different change is requested for the system,
especially by stakeholders, it is neither optional nor ignorable. It has
to be implemented without affecting other components of the system.
This is where change control comes in handy: it helps project teams
modify the scope of the project using specified controls and policies.
Change Control is practiced whenever a project is not progressing as
planned.
It is mandatory that a formal change request document be completed and
reviewed in order to keep control of change requests.
A number of questions may come up while analyzing Change Control, such
as:
● Who will approve the change?
● Does it need to go through a change control board?
● How much time will be required to research and implement the
change?
● What are the impacts of the change on other components of the
system (schedules, cost, resources, etc.)?
● Is there any threshold under which the project management can
approve it?

Different factors of Change Control process

There are various factors that a Change Control process should
consider. The steps in the Change Control process, and the action taken
at each step, are:

● Change request initiation and control:
  - Requests for change should be standardized and subject to
    management review.
  - The change requestor should be kept informed.

● Impact assessment:
  - Make sure that all requests for change are assessed in a
    structured way for analyzing possible impacts.

● Control and documentation of changes:
  - A change log should be maintained that records the date, the
    details of the person who made the change, and the changes
    implemented.
  - Only authorized individuals should be able to make changes.
  - A process for rolling back to the previous version should be
    identified.

● Documentation and procedures:
  - Whenever system changes are implemented, the procedures and
    associated documents should be updated accordingly.

● Authorized maintenance:
  - System access rights should be controlled to avert unauthorized
    access.

● Testing and user sign-off:
  - Software should be thoroughly tested.

● Version control:
  - Controls should be placed on production source code to make sure
    that only the latest version is updated.

● Emergency changes:
  - A verbal authorization should be obtained, and the change should
    be documented as soon as possible.

Process of Change Control


Before we look into what is involved in the Change Control process, let
us get familiar with the documents used in Change Control. While
carrying out Change Control, there are mainly two documents involved:
● Change Log: a change log is a document that lists the details of
all Change Requests, such as project number, PCR (project change
request) ID, priority, owner details, target date, status and status
date, raised by, and date when raised.
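As an illustration, one row of such a change log could be modeled as a Python dataclass; the field names below simply mirror the list above and are not a standard schema:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ChangeRequest:
    """One row of a change log (field names are illustrative)."""
    project_number: str
    pcr_id: str
    priority: str
    owner: str
    raised_by: str
    date_raised: date
    target_date: date
    status: str = "Open"
    status_date: date = field(default_factory=date.today)

# The change log itself is then just a list of such records.
change_log = [
    ChangeRequest("PRJ-7", "PCR-001", "High", "maint-team",
                  "customer", date(2023, 1, 5), date(2023, 2, 1)),
]
print(change_log[0].status)
```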

Version Control Systems:- Version control systems are a category of
software tools that help in recording changes made to files by keeping
track of the modifications done in the code.
Why Version Control system is so Important?

As we know, a software product is developed in collaboration by a group
of developers who might be located at different locations, each of them
contributing some specific kind of functionality or feature. In order to
contribute to the product, they make modifications to the source code
(either by adding or removing code). A version control system is a kind
of software that helps the developer team efficiently communicate and
manage (track) all the changes that have been made to the source code,
along with information such as who made each change and what was
changed. A separate branch is created for every contributor who makes
changes, and the changes are not merged into the original source code
until they have been analyzed; as soon as the changes are given the
green signal, they are merged into the main source code. This not only
keeps the source code organized but also improves productivity by making
the development process smooth.

Basically, a version control system keeps track of the changes made to a
particular piece of software and takes a snapshot of every modification.
Suppose a team of developers adds some new functionality to an
application and the updated version does not work properly; since the
version control system keeps track of our work, we can discard the new
changes and continue with the previous version.

Benefits of the version control system:

● Enhances project development speed by providing efficient
collaboration,
● Leverages the productivity and skills of the employees, and
expedites product delivery, through better communication and
assistance,
● Reduces the possibility of errors and conflicts during project
development through traceability of every small change,
● Employees or contributors to the project can contribute from
anywhere, irrespective of their geographical location, through the
VCS,
● For each contributor to the project, a different working copy is
maintained and is not merged into the main file until the working
copy is validated. The most popular examples are Git, Helix Core,
and Microsoft TFS,
● Helps in recovery in case of any disaster or contingency,
● Informs us about who made which changes, when, and why.

Use of Version Control System:

● A repository: it can be thought of as a database of changes. It
contains all the edits and historical versions (snapshots) of the
project.
● Working copy (sometimes called a checkout): it is the personal
copy of all the files in a project. You can edit this copy without
affecting the work of others, and you finally commit your changes to
the repository when you are done making them.
● Working in a group: consider yourself working in a company where
you are asked to work on a live project. You can't change the main
code as it is in production, and any change may cause inconvenience
to the users; you are also working in a team, so you need to
collaborate with your team and adapt to their changes. Version
control helps you merge different requests into the main repository
without making any undesirable changes. You may test functionality
without putting it live, and you don't need to download and set up
everything each time: just pull the changes, make your changes, test
them, and merge them back.

Types of Version Control Systems:


● Local Version Control Systems
● Centralized Version Control Systems
● Distributed Version Control Systems

Local Version Control Systems: this is one of the simplest forms and has
a database that keeps all the changes to files under revision control.
RCS is one of the most common VCS tools. It keeps patch sets
(differences between files) in a special format on disk. By adding up
all the patches, it can then re-create what any file looked like at any
point in time.
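The patch-set idea can be sketched with Python's standard difflib module, which can both compute line diffs (ndiff) and reconstruct a sequence from them (restore). This toy store is an illustration of the concept, not how RCS is actually implemented:

```python
import difflib

class PatchSetStore:
    """Sketch of a local, RCS-style store: keep the first revision
    plus one patch set (ndiff delta) per later revision, and rebuild
    any revision on demand by replaying the patches."""

    def __init__(self, initial_lines):
        self.base = list(initial_lines)
        self.deltas = []  # one ndiff delta per later revision

    def commit(self, new_lines):
        old = self.revision(len(self.deltas))       # current latest revision
        self.deltas.append(list(difflib.ndiff(old, new_lines)))

    def revision(self, n):
        lines = list(self.base)
        for delta in self.deltas[:n]:
            lines = list(difflib.restore(delta, 2))  # 2 = the "new" side
        return lines

store = PatchSetStore(["print('v1')\n"])
store.commit(["print('v2')\n", "print('extra')\n"])
print(store.revision(0))  # the original file
print(store.revision(1))  # rebuilt from base + patch set
```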

Centralized Version Control Systems: centralized version control systems
contain just one repository globally, and every user needs to commit in
order for their changes to be reflected in the repository. It is
possible for others to see your changes by updating.

Two things are required to make your changes visible to others:

● You commit
● They update
The benefit of CVCS (Centralized Version Control Systems) is that it
enables collaboration amongst developers and provides insight, to a
certain extent, into what everyone else is doing on the project. It also
allows administrators fine-grained control over who can do what.

It has some downsides as well, which led to the development of DVCS. The
most obvious is the single point of failure that the centralized
repository represents: if it goes down, collaboration and saving
versioned changes are not possible during that period. And what if the
hard disk of the central database becomes corrupted, and proper backups
haven't been kept? You lose absolutely everything.

Distributed Version Control Systems: distributed version control systems
contain multiple repositories. Each user has their own repository and
working copy. Just committing your changes will not give others access
to them, because a commit only records those changes in your local
repository; you need to push them in order to make them visible on the
central repository. Similarly, when you update, you do not get others'
changes unless you have first pulled those changes into your repository.

To make your changes visible to others, 4 things are required:

● You commit
● You push
● They pull
● They update
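The four steps can be illustrated with a toy Python simulation in which repositories are just lists of snapshots; this is a teaching sketch, not a real DVCS implementation:

```python
# Toy simulation of the four DVCS steps: commit, push, pull, update.
central = []                      # the shared central repository

class DevRepo:
    def __init__(self):
        self.local = []           # this developer's local repository
        self.working_copy = None  # the files being edited

    def commit(self):             # step 1: record into LOCAL history only
        self.local.append(self.working_copy)

    def push(self):               # step 2: publish local history centrally
        central.extend(self.local[len(central):])

    def pull(self):               # step 3: fetch others' history
        self.local = list(central)

    def update(self):             # step 4: refresh the working copy
        self.working_copy = self.local[-1]

alice, bob = DevRepo(), DevRepo()
alice.working_copy = "v1"
alice.commit()                    # Bob cannot see "v1" yet
alice.push()                      # now it is on the central repository
bob.pull()
bob.update()
print(bob.working_copy)
```

Note that after `alice.commit()` alone, `central` is still empty, which is exactly why the push and pull steps are needed in a DVCS.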

The most popular distributed version control systems are Git, and Mercurial.
They help us overcome the problem of single point of failure.
Purpose of Version Control:

● Multiple people can work simultaneously on a single project.


Everyone works on and edits their own copy of the files and it is up
to them when they wish to share the changes made by them with
the rest of the team.
● It also enables one person to use multiple computers to work on a
project, so it is valuable even if you are working by yourself.
● It integrates the work that is done simultaneously by different
members of the team. In some rare cases, when conflicting edits are
made by two people to the same line of a file, then human
assistance is requested by the version control system in deciding
what should be done.
● Version control provides access to the historical versions of a
project. This is insurance against computer crashes or data loss. If
any mistake is made, you can easily roll back to a previous version.
It is also possible to undo specific edits without losing the work
done in the meantime. It can easily be known when, why, and by whom
any part of a file was edited.

CASE stands for Computer-Aided Software Engineering. It means the
development and maintenance of software projects with the help of
various automated software tools.

CASE Tools
CASE tools are a set of software application programs which are used to
automate SDLC activities. CASE tools are used by software project
managers, analysts, and engineers to develop software systems.

There are a number of CASE tools available to simplify various stages of
the Software Development Life Cycle, such as analysis tools, design
tools, project management tools, database management tools, and
documentation tools, to name a few.

Use of CASE tools accelerates the development of the project to produce
the desired result and helps to uncover flaws before moving ahead to the
next stage of software development.
Components of CASE Tools
CASE tools can be broadly divided into the following parts based on their use at a

particular SDLC stage:

● Central Repository - CASE tools require a central repository, which can

serve as a source of common, integrated and consistent information.

Central repository is a central place of storage where product

specifications, requirement documents, related reports and diagrams,

other useful information regarding management is stored. Central

repository also serves as data dictionary.

● Upper Case Tools - Upper CASE tools are used in planning, analysis

and design stages of SDLC.

● Lower Case Tools - Lower CASE tools are used in implementation,

testing and maintenance.

● Integrated Case Tools - Integrated CASE tools are helpful in all the

stages of SDLC, from Requirement gathering to Testing and

documentation.
CASE tools can be grouped together if they have similar functionality, process

activities and capability of getting integrated with other tools.

Scope of Case Tools


The scope of CASE tools goes throughout the SDLC.

Case Tools Types


Now we briefly go through various CASE tools

Diagram tools
These tools are used to represent system components, data and control flow among

various software components and system structure in a graphical form. For example,

Flow Chart Maker tool for creating state-of-the-art flowcharts.

Process Modeling Tools


Process modeling is a method to create the software process model, which
is used to develop the software. Process modeling tools help the
managers to choose a process model or modify it as per the requirements
of the software product. For example, EPF Composer.

Project Management Tools


These tools are used for project planning, cost and effort estimation,
project scheduling, and resource planning. Managers have to ensure that
project execution strictly complies with every step of the software
project management plan. Project management tools help in storing and
sharing project information in real time throughout the organization.
For example, Creative Pro Office, Trac Project, Basecamp.
Documentation Tools
Documentation in a software project starts prior to the software process, goes

throughout all phases of SDLC and after the completion of the project.

Documentation tools generate documents for technical users and end users.

Technical users are mostly in-house professionals of the development team who

refer to system manual, reference manual, training manual, installation manuals etc.

The end user documents describe the functioning and how-to of the system such as

user manual. For example, Doxygen, DrExplain, Adobe RoboHelp for documentation.

Analysis Tools
These tools help to gather requirements, automatically check for any inconsistency,

inaccuracy in the diagrams, data redundancies or erroneous omissions. For example,

Accept 360, Accompa, CaseComplete for requirement analysis, Visible Analyst for

total analysis.

Design Tools
These tools help software designers to design the block structure of the software,

which may further be broken down in smaller modules using refinement techniques.

These tools provide detailing of each module and interconnections among modules.

For example, Animated Software Design

Configuration Management Tools


An instance of software is released under one version. Configuration Management

tools deal with –

● Version and revision management

● Baseline configuration management


● Change control management

CASE tools help in this by automatic tracking, version management and release

management. For example, Fossil, Git, AccuREV.

Software Cost Estimation


For any new software project, it is necessary to know how much it will
cost to develop and how much development time it will take. These
estimates are needed before development is initiated, but how is this
done? Several estimation procedures have been developed, and they have
the following attributes in common.

1. Project scope must be established in advance.

2. Software metrics are used as a basis from which estimates are made.

3. The project is broken into small pieces, which are estimated
individually.

To achieve reliable cost and schedule estimates, several options arise:

1. Delay estimation until late in the project.

2. Use relatively simple decomposition techniques to generate project
cost and schedule estimates.

3. Acquire one or more automated estimation tools.

Uses of Cost Estimation

1. During the planning stage, one needs to choose how many engineers are
required for the project and to develop a schedule.

2. In monitoring the project's progress, one needs to assess whether the
project is progressing according to plan and take corrective action, if
necessary.
Cost Estimation Models
A model may be static or dynamic. In a static model, a single variable
is taken as a key element for calculating cost and time. In a dynamic
model, all variables are interdependent, and there is no basic variable.

Static, Single Variable Models: when a model makes use of a single
variable to calculate desired values such as cost, time, effort, etc.,
it is said to be a single variable model. The most common equation is:

C = a L^b

where C = cost,

L = size,

and a and b are constants.

The Software Engineering Laboratory established a model called the SEL
model for estimating its software production. This model is an example
of the static, single variable model:

E = 1.4 L^0.93

DOC = 30.4 L^0.90

D = 4.6 L^0.26

where E = effort (in person-months),

DOC = documentation (number of pages),

D = duration (in months),

L = number of lines of code (in thousands, i.e., KLOC).
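As a quick sanity check, the SEL equations can be evaluated in a few lines of Python (the 30 KLOC input is just an illustrative size, not from the notes):

```python
def sel_effort(kloc):
    """E = 1.4 * L^0.93, effort in person-months (L in KLOC)."""
    return 1.4 * kloc ** 0.93

def sel_documentation(kloc):
    """DOC = 30.4 * L^0.90, documentation in pages."""
    return 30.4 * kloc ** 0.90

def sel_duration(kloc):
    """D = 4.6 * L^0.26, duration in months."""
    return 4.6 * kloc ** 0.26

# Estimates for an illustrative 30 KLOC product:
print(round(sel_effort(30)))         # ~33 person-months
print(round(sel_documentation(30)))  # ~649 pages
print(round(sel_duration(30), 1))    # ~11.1 months
```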

Static, Multivariable Models: these models are based on method (1); they
depend on several variables describing various aspects of the software
development environment. In some models, several variables are needed to
describe the software development process, and the selected equation
combines these variables to give an estimate of time and cost. These
models are called multivariable models.

Walston and Felix developed a model at IBM that provides the following
relationship between lines of source code and effort:

E = 5.2 L^0.91

In the same manner, the duration of development is given by

D = 4.1 L^0.36

The productivity index uses 29 variables which were found to be highly
correlated with productivity, as follows:

I = Σ (i = 1 to 29) W_i X_i

where W_i is the weight factor for the i-th variable and
X_i = {-1, 0, +1}; the estimator gives X_i one of the values -1, 0, or
+1 depending on whether the variable decreases, has no effect on, or
increases productivity.

Example: Compare the Walston-Felix model with the SEL model on a
software development project expected to involve 8 person-years of
effort.

a. Calculate the number of lines of source code that can be produced.

b. Calculate the duration of the development.

c. Calculate the productivity in LOC/PY.

d. Calculate the average manning.

Solution:
The amount of manpower involved = 8 PY = 96 person-months.

(a) The number of lines of source code can be obtained by inverting the
effort equations to give L = (E/a)^(1/b). Then

L (SEL) = (96/1.4)^(1/0.93) = 94.264 KLOC ≈ 94,264 LOC

L (W-F) = (96/5.2)^(1/0.91) = 24.632 KLOC ≈ 24,632 LOC

(b) The duration in months can be calculated by means of the duration
equations:

D (SEL) = 4.6 (L)^0.26 = 4.6 (94.264)^0.26 = 15 months

D (W-F) = 4.1 (L)^0.36 = 4.1 (24.632)^0.36 = 13 months

(c) Productivity is the number of lines of code produced per
person-year:

P (SEL) = 94,264 / 8 ≈ 11,783 LOC/PY

P (W-F) = 24,632 / 8 ≈ 3,079 LOC/PY

(d) Average manning is the average number of persons required per month
on the project:

M (SEL) = 96 / 15 ≈ 6.4 persons

M (W-F) = 96 / 13 ≈ 7.4 persons
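The arithmetic in the worked example can be verified in a few lines of Python:

```python
# Verify the worked example: 8 person-years = 96 person-months.
E = 96.0

# (a) Invert E = a * L^b to get the size L in KLOC.
L_sel = (E / 1.4) ** (1 / 0.93)   # SEL model  -> ~94.26 KLOC
L_wf  = (E / 5.2) ** (1 / 0.91)   # W-F model  -> ~24.63 KLOC

# (b) Duration in months from the duration equations.
D_sel = 4.6 * L_sel ** 0.26       # ~15 months
D_wf  = 4.1 * L_wf ** 0.36        # ~13 months

# (c) Productivity in LOC per person-year, (d) average manning.
prod_sel = L_sel * 1000 / 8       # ~11,783 LOC/PY
manning_sel = E / D_sel           # ~6.4 persons

print(round(L_sel * 1000), round(L_wf * 1000))
print(round(D_sel), round(D_wf))
```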

COCOMO Model:- COCOMO (Constructive Cost Model) is a regression model
based on LOC, i.e., the number of lines of code. It is a procedural cost
estimation model for software projects and is often used as a process
for reliably predicting the various parameters associated with a
project, such as size, effort, cost, time, and quality. It was proposed
by Barry Boehm in 1981 and is based on the study of 63 projects, which
makes it one of the best-documented models. The key parameters which
define the quality of any software product, and which are also an
outcome of COCOMO, are primarily Effort & Schedule:
● Effort: Amount of labor that will be required to complete a task. It
is measured in person-months units.
● Schedule: Simply means the amount of time required for the
completion of the job, which is, of course, proportional to the
effort put in. It is measured in units of time such as weeks and
months.

Different models of COCOMO have been proposed to predict the cost estimation at
different levels, based on the amount of accuracy and correctness required. All of
these models can be applied to a variety of projects, whose characteristics determine
the values of the constants to be used in subsequent calculations. These
characteristics pertaining to different system types are mentioned below. Boehm's
definition of organic, semidetached, and embedded systems:
1. Organic – A software project is said to be an organic type if the
team size required is adequately small, the problem is well
understood and has been solved in the past and also the team
members have a nominal experience regarding the problem.
2. Semi-detached – A software project is said to be a Semi-detached
type if the vital characteristics such as team size, experience, and
knowledge of the various programming environments lie in between
those of Organic and Embedded. Projects classified as Semi-detached
are comparatively less familiar and more difficult to develop than
organic ones, and require more experience, better guidance, and
creativity. E.g., compilers or different embedded systems can be
considered of the Semi-detached type.
3. Embedded – A software project requiring the highest level of
complexity, creativity, and experience falls under this category. Such
software requires a larger team size than the other two models, and
the developers need to be sufficiently experienced and creative to
develop such complex models.
Based on these system types, COCOMO has the following models:
1. Basic COCOMO Model
2. Intermediate COCOMO Model
3. Detailed COCOMO Model
1. Basic Model –

The following formulas are used for the cost estimation of the basic
COCOMO model, and are also used in the subsequent models:

Effort = a (KLOC)^b Person-Months

Development Time = c (Effort)^d Months

The constant values a, b, c and d for the Basic Model for the different
categories of system are:

Software Projects    a     b      c     d

Organic              2.4   1.05   2.5   0.38

Semi Detached        3.0   1.12   2.5   0.35

Embedded             3.6   1.20   2.5   0.32

The effort is measured in Person-Months and, as evident from the
formula, is dependent on Kilo-Lines of Code. The development time
is measured in months. These formulas are used as such in the
Basic Model calculations; since not much consideration of different
factors such as reliability and expertise is taken into account,
the estimate is rough. Below is a C++ program for Basic COCOMO:

// C++ program to implement basic COCOMO
#include <bits/stdc++.h>
using namespace std;

// Round a float to the nearest integer
int fround(float x)
{
    return (int)(x + 0.5);
}

// Calculate effort, development time and average staff
// for the Basic COCOMO model
void calculate(float table[][4], int n, char mode[][15], int size)
{
    float effort, time, staff;
    int model;

    // Select the mode according to size (in KLOC)
    if (size >= 2 && size <= 50)
        model = 0; // organic
    else if (size > 50 && size <= 300)
        model = 1; // semi-detached
    else
        model = 2; // embedded

    cout << "The mode is " << mode[model];

    // Effort = a * (size)^b
    effort = table[model][0] * pow(size, table[model][1]);

    // Development Time = c * (Effort)^d
    time = table[model][2] * pow(effort, table[model][3]);

    // Average staff = Effort / Development Time
    staff = effort / time;

    cout << "\nEffort = " << effort << " Person-Month";
    cout << "\nDevelopment Time = " << time << " Months";
    cout << "\nAverage Staff Required = " << fround(staff) << " Persons";
}

int main()
{
    // Constants a, b, c, d for organic, semi-detached and embedded modes
    float table[3][4] = { { 2.4, 1.05, 2.5, 0.38 },
                          { 3.0, 1.12, 2.5, 0.35 },
                          { 3.6, 1.20, 2.5, 0.32 } };

    char mode[][15] = { "Organic", "Semi-Detached", "Embedded" };

    int size = 4;
    calculate(table, 3, mode, size);
    return 0;
}
Output:
The mode is Organic
Effort = 10.289 Person-Month
Development Time = 6.06237 Months
Average Staff Required = 2 Persons

2. Intermediate Model – The basic COCOMO model assumes that the
effort is only a function of the number of lines of code and some
constants evaluated according to the different software systems.
However, in reality, no system's effort and schedule can be solely
calculated on the basis of Lines of Code; various other factors
such as reliability, experience and capability must also be
considered. These factors are known as Cost Drivers, and the
Intermediate Model utilizes 15 such drivers for cost estimation.
Classification of Cost Drivers and their attributes:
(i) Product attributes –
1. Required software reliability extent
2. Size of the application database
3. The complexity of the product
(ii) Hardware attributes –
4. Run-time performance constraints
5. Memory constraints
6. The volatility of the virtual machine environment
7. Required turnaround time
(iii) Personnel attributes –
8. Analyst capability
9. Software engineering capability
10. Applications experience
11. Virtual machine experience
12. Programming language experience
(iv) Project attributes –
13. Use of software tools
14. Application of software engineering methods
15. Required development schedule
3. Detailed Model – Detailed COCOMO incorporates all
characteristics of the intermediate version with an assessment of
the cost driver’s impact on each step of the software engineering
process. The detailed model uses different effort multipliers for
each cost driver attribute. In detailed COCOMO, the whole
software is divided into different modules and then we apply
COCOMO in different modules to estimate effort and then sum the
effort. The Six phases of detailed COCOMO are:
1. Planning and requirements
2. System design
3. Detailed design
4. Module code and test
5. Integration and test
6. Cost Constructive model

Putnam Resource Allocation Model

The Lawrence Putnam model describes the time and effort required to finish a software
project of a specified size. Putnam makes use of the so-called Norden/Rayleigh curve
to estimate project effort, schedule and defect rate. Putnam noticed that software
staffing profiles followed the well-known Rayleigh distribution, and used his
observation about productivity levels to derive the software equation:

L = Ck K^(1/3) td^(4/3)

The various terms of this expression are as follows:

K is the total effort expended (in PM) in product development, and L is the product
size estimate in KLOC.

td corresponds to the time of system and integration testing. Therefore, td can be
approximately considered as the time required for developing the product.

Ck is the state-of-technology constant and reflects constraints that impede the
progress of the program. Typical values are:

Ck = 2 for a poor development environment

Ck = 8 for a good software development environment

Ck = 11 for an excellent environment (in addition to following software engineering
principles, automated tools and techniques are used).

The exact value of Ck for a specific task can be computed from the historical data of the
organization developing it.

Putnam proposed that optimal staff development on a project should follow the
Rayleigh curve. Only a small number of engineers are required at the beginning of a plan
to carry out planning and specification tasks. As the project progresses and more
detailed work is necessary, the number of engineers reaches a peak. After
implementation and unit testing, the number of project staff falls.

Effect of a Schedule Change on Cost

By rearranging the software equation, Putnam derived the following expression:

K = L^3 / (Ck^3 td^4)

Where, K is the total effort expended (in PM) in the product development

L is the product size in KLOC

td corresponds to the time of system and integration testing

Ck is the state-of-technology constant and reflects constraints that impede the
progress of the program

Now by using the above expression, it is obtained that, for the same product size,
K = C / td^4, where C = L^3 / Ck^3 is a constant.


(As project development effort is equally proportional to project development cost)

From the above expression, it can be easily observed that when the schedule of a project
is compressed, the required development effort as well as project development cost
increases in proportion to the fourth power of the degree of compression. It means that
a relatively small compression in delivery schedule can result in a substantial penalty of
human effort as well as development cost.

For example, if the estimated development time is 1 year, then to develop the product
in 6 months, the total effort required to develop the product (and hence the project cost)
increases 16 times.

Risk Management:
A risk is a probable problem - it might happen or it might not. There are two main
characteristics of risk:

Uncertainty - the risk may or may not happen, which means there are no 100%
risks.

Loss - if the risk occurs in reality, undesirable results or losses will occur.

Risk management is a sequence of steps that help a software team to
understand, analyze and manage uncertainty. Risk management consists of:

● Risk Identification
● Risk Analysis
● Risk Planning
● Risk Monitoring
A software project may be affected by a large variety of risks. In order
to be able to systematically identify the important risks which might
affect a software project, it is necessary to categorize risks into
different classes. The project manager can then examine which risks
from each class are relevant to the project.
There are three main categories of risks which can affect a software
project:

1. Project Risks:
Project risks concern various forms of budgetary, schedule, personnel,
resource, and customer-related problems. An important project risk is
schedule slippage. Since software is intangible, it is very difficult
to monitor and control a software project. It is very difficult to
control something that cannot be seen. For a manufacturing project,
such as manufacturing cars, the project manager can see the product
taking shape.
For example, he can see that the engine is fitted, that the doors have
been fitted, that the car is being painted, etc. Thus he can easily
assess the progress of the work and control it. The invisibility of
the product being developed is an important reason why many software
projects suffer from the risk of schedule slippage.

2. Technical Risks:
Technical risks concern potential design, implementation, interfacing,
testing, and maintenance problems. Technical risks also include
ambiguous specifications, incomplete specifications, changing
specifications, technical uncertainty, and technical obsolescence. Most
technical risks occur due to the development team's insufficient
knowledge about the project.

3. Business Risks:
This type of risk includes the risks of building an excellent product that no one
wants, losing budgetary or personnel commitments, etc.

Risk Management Steps in Software Engineering

Risk Management is an important part of project planning activities. It involves
identifying and estimating the probability of risks with their order of impact on the
project.

Risk Management Steps:

There are some steps that need to be followed in order to reduce risk. These
steps are as follows:

1. Risk Identification:

Risk identification involves brainstorming activities. It also involves the
preparation of a risk list. Brainstorming is a group discussion technique where all
the stakeholders meet together. This technique produces new ideas and
promotes creative thinking.
Preparation of a risk list involves identification of risks that have occurred
repeatedly in previous software projects.

2. Risk Analysis and Prioritization:

It is a process that consists of the following steps:

● Identifying the problems causing risk in projects
● Identifying the probability of occurrence of the problem
● Identifying the impact of the problem
● Assigning values to step 2 and step 3 in the range of 1 to 10
● Calculating the risk exposure factor, which is the product of the values of step
2 and step 3
● Preparing a table consisting of all the values and ordering risks on the basis
of the risk exposure factor

For example,

Risk No   Problem                            Probability of occurrence   Impact of problem   Risk exposure   Priority

R1        Issue of incorrect password        2                           2                   4               10

R2        Testing reveals a lot of defects   1                           9                   9               7

R3        Design is not robust               2                           7                   14              5
3. Risk Avoidance and Mitigation:

The purpose of this technique is to altogether eliminate the occurrence of risks.
One method to avoid risks is to reduce the scope of the project by removing non-
essential requirements.

4. Risk Monitoring:

In this technique, the risk is monitored continuously by reevaluating the risks, the
impact of risk, and the probability of occurrence of the risk.

This ensures that:

● Risk has been reduced
● New risks are discovered
● Impact and magnitude of risk are measured

Risk Analysis in project management is a sequence of processes to identify
the factors that may affect a project's success. These processes include risk
identification, analysis of risks, risk management and control, etc. Proper risk
analysis helps to control possible future events that may harm the overall
project. It is more of a proactive than a reactive process.
How to Manage Risk?
Risk Management in Software Engineering primarily involves following
activities:

Plan risk management

It is the procedure of defining how to perform risk management activities for a
project.

Risk Identification
It is the procedure of determining which risk may affect the project most. This
process involves documentation of existing risks.
The input for identifying risk will be
● Risk management plan
● Project scope statement
● Cost management plan
● Schedule management plan
● Human resource management plan
● Scope baseline
● Activity cost estimates
● Activity duration estimates
● Stakeholder register
● Project documents
● Procurement documents
● Communication management plan
● Enterprise environmental factor
● Organizational process assets
The output of the process will be a
● Risk register

Perform qualitative risk analysis


It is the process of prioritizing risks for further analysis of project risk or action
by combining and assessing their probability of occurrence and impact. It
helps managers to lessen the uncertainty level and concentrate on high
priority risks.
Plan risk management should take place early in the project, since risk can have
an impact on various aspects, for example: cost, time, scope, quality and procurement.
The inputs for qualitative Project Risk Analysis and Management includes
● Risk management plan
● Scope baseline
● Risk register
● Enterprise environmental factors
● Organizational process assets
The output of this stage would be
● Project documents updates
Quantitative risk analysis
It is the procedure of numerically analyzing the effect of identified risks on
overall project objectives. This kind of analysis is quite helpful for decision-making
as it minimizes project uncertainty.

(Figure: Risk Management Matrix)

The inputs of this stage are
● Risk management plan
● Cost management plan
● Schedule management plan
● Risk register
● Enterprise environmental factors
● Organizational process assets
While the output will be
● Project documents updates

Plan risk responses

Plan risk response is helpful to enhance opportunities and to minimize threats to
project objectives. It addresses the risks by their priority, inserting resources
and activities into the budget, schedule, and project management plan as needed.
The inputs for plan risk responses are
● Risk management plan
● Risk register
While the output are
● Project management plan updates
● Project documents updates

Control Risks
Control risk is the procedure of tracking identified risks, identifying new risks,
monitoring residual risks and evaluating risk.
The inputs for this stage includes
● Software Project management plan
● Risk register
● Work performance data
● Work performance reports
The output of this stage would be
● Work performance information
● Change requests
● Project management plan updates
● Project documents updates
● Organizational process assets updates
