Unit II Project Life Cycle and Effort Estimation: IT8075 Software Project Management, Department of IT, 2021-2022
Software processes are complex and, like all intellectual and creative
processes, rely on people making decisions and judgements. Because of the need
for judgement and creativity, attempts to automate software processes have met
with limited success. Computer-aided software engineering (CASE) tools can
support some process activities. However, there is no possibility, at least in the
next few years, of more extensive automation where software takes over creative
design from the engineers involved in the software process.
Although there are many software processes, some fundamental activities are
common to all software processes:
1. Software specification: the functionality of the software and constraints on its operation must be defined.
2. Software design and implementation: software that meets the specification must be produced.
3. Software validation: the software must be validated to ensure that it does what the customer wants.
4. Software evolution: the software must evolve to meet changing customer needs.
Choice of process models
The word 'process' is sometimes used to emphasize the idea of a system in action. In order to achieve an outcome, the system will have to execute one or more activities: this is its process. This idea can be applied to the development of computer-based systems, where a number of interrelated activities have to be undertaken to create a final product. These activities can be organized in different ways, and we can call these process models.
A major part of the planning will be the choosing of the development methods to
be used and the slotting of these into an overall process model.
St. Joseph’s College of Engineering 1
The planner needs not only to select methods but also to specify how each method is to be applied. With methods such as SSADM, there is a considerable degree of choice about how it is to be applied: not all parts of SSADM are compulsory. Many student projects have the rather basic failing that at the planning stage they claim that, say, SSADM is to be used; in the event, all that is produced are a few SSADM fragments such as a top-level data flow diagram and a preliminary logical data structure diagram. If this is all the particular project requires, it should be stated at the outset.
Software process models
The waterfall model
Plan-driven model. Separate and distinct phases of specification and
development.
Incremental development
Specification, development and validation are interleaved. May be plan-driven
or agile.
Reuse-oriented software engineering
The system is assembled from existing components. May be plan-driven or agile.
In practice, most large systems are developed using a process that incorporates
elements from all of these models.
Spiral Model
In 1988, Barry Boehm published a formal software system development "spiral model", which combines key aspects of the waterfall model and rapid prototyping methodologies, in an effort to combine the advantages of top-down and bottom-up concepts. It placed emphasis on a key area many felt had been neglected by other methodologies: deliberate iterative risk analysis, particularly suited to large-scale complex systems.
Benefits
1. Align business and IT to overcome rework weariness
2. Enable enterprises to build the ideal innovation pyramid
3. Respond to transforming technologies and expectations
When to use RAD Methodology?
• When a system needs to be produced in a short span of time (2-3 months)
• When the requirements are known
• When the user will be involved all through the life cycle
• When technical risk is less
• When there is a necessity to create a system that can be modularized in 2-3
months of time
• When a budget is high enough to afford designers for modeling along with
the cost of automated tools for code generation
Agile methods
Agile software development methodology is a process for developing software (like other software development methodologies: the Waterfall model, V-Model, Iterative model, etc.). However, the Agile methodology differs significantly from the others. In English, 'agile' means 'able to move quickly and easily', and responding swiftly to change is a key aspect of Agile software development as well.
Brief overview of Agile Methodology
In traditional software development methodologies like Waterfall model, a
project can take several months or years to complete and the customer may
not get to see the end product until the completion of the project.
At a high level, non-Agile projects allocate extensive periods of time for Re-
quirements gathering, design, development, testing and UAT, before finally
deploying the project.
In contrast to this, Agile projects consist of Sprints or iterations, which are much shorter in duration.
One of the major difficulties faced while using the traditional heavy-weight
methodologies is that these require the customers to come up with all the
requirements up front. This is not achievable in most projects. Also, the traditional
heavy weight processes were too rigid and it became difficult to use these in
projects involving significant reuse and modifications to the existing code. These
issues are elaborated in the following.
Too rigid: The traditional heavy weight processes worked fine when software was
being developed from scratch. However, as we have noted in Chapter 1, the project
characteristics have changed drastically over the last couple of decades. Now
significant code reuse is made during software development and the proportion of
new code that is written is often as low as 10% of the total code of the software
being developed. Examples of such projects are customization projects. Due
to the rigidity of the traditional processes it becomes difficult to tailor these for
efficient development of modern projects requiring significant amount of code
reuse.
Incremental delivery after each time box: In agile methods, the required
features are decomposed into many small parts that can be incrementally de-
veloped. The agile model adopts an iterative approach, in the sense that each
incremental part is developed over an iteration. At a time, only one incre-
ment is planned, developed, and then deployed at the customer site. No
long-term plans are made. The time to complete an iteration is called a time
box. The implication of the term ‘time box’ is that the end date for an iteration
does not change. The development team can, however, decide to reduce the
delivered functionality during a time box if necessary, but the delivery date is
considered sacrosanct. During an iteration one or more features are analyzed,
designed, coded, and tested. Each iteration is intended to be small and easily
manageable and lasts for a couple of weeks only. For large projects, multiple
iterations may be progressing simultaneously that may be focusing on paral-
lel development of different sets of features.
Pre-project: Initiation of the project, agreeing the Terms of Reference for the work.
Feasibility: Typically a short phase to assess the viability and the outline business case (justification).
Foundations: Key phase for ensuring the project is understood and defined well enough so that the scope can be baselined at a high level and the technology components and standards agreed, before the development activity begins.
Deployment: For each Increment (set of timeboxes) of the project, the solution is made available.
Extreme Programming
The Extreme Programming technique is very helpful when there are constantly changing demands or requirements from the customers, or when they are not sure about the functionality of the system. It advocates frequent "releases" of the product in short development cycles, which inherently improves the productivity of the system and also introduces checkpoints where any customer requirements can be easily incorporated. XP develops software keeping the customer at the centre.
5 values
• Communication: Everyone on a team works jointly at every stage of the project.
• Simplicity: Developers strive to write simple code bringing more value to a product, as it saves time and effort.
• Feedback: Frequent feedback from testing and from the customer guides the work.
• Courage: Team members honestly raise problems and adapt to change.
• Respect: Every team member's contribution is valued.
5 XP principles
• Rapid feedback:
Team members understand the given feedback and react to it right away.
• Assumed simplicity:
Developers need to focus on the job that is important at the moment and follow
YAGNI (You Ain’t Gonna Need It) and DRY (Don’t Repeat Yourself) principles.
• Incremental changes:
Small changes made to a product step by step work better than big ones made at
once.
• Embracing change:
If a client thinks a product needs to be changed, programmers should support this
decision and plan how to implement new requirements.
• Quality work:
A team that works well makes a valuable product and feels proud of it.
Scrum
In the Scrum model, the team members assume three basic roles: product owner, scrum master, and team member. The responsibilities associated with these three basic roles are discussed below.
Product owner: The product owner represents the customer’s perspective
and guides the team toward building the right software. In other words, in
various meetings the product owner takes the responsibility of communi-
cating the customer’s views to the development team. To this end, in every
sprint the product owner in consultation with the team members defines the
features of the software to be developed in the next sprint, decides on the re-
lease dates, and also reprioritizes the software features if needed.
Scrum master: The scrum master acts as the project manager for the project.
The responsibilities of the scrum master include removing any impediments
that the project may face, ensuring that the team is fully productive by fos-
tering close cooperation among all team members. Also, the scrum master
acts as a liaison between the customers, top management, and the team and
facilitates the development work. The scrum team is therefore shielded by
the scrum master from external interferences.
Team member: A scrum team usually consists of cross-functional team members with expertise in areas such as quality assurance, programming, user interface design, and testing. The team is self-organizing in the sense that the team members distribute the responsibilities among themselves.
Sprint Backlog
The Sprint Backlog is a list of everything that the team commits to achieve in a
given Sprint. Once created, no one can add to the Sprint Backlog except the
Development Team. If the Development Team needs to drop an item from the
Sprint Backlog, they must negotiate it with the Product Owner. During this
negotiation, the ScrumMaster should work with the Development Team and
Product Owner to try to find ways to create some smaller increment of an item
rather than drop it altogether.
Scrum Practices
The practices are described in detail:
In principle, the micro process represents the daily activity of the individual developer, or of a small team of developers. Here the analysis and design phases are intentionally blurred. Stroustrup observes that: "There are no cookbook methods that can replace intelligence, experience, and good taste in design and programming... The different phases of a software project, such as design, programming, and testing cannot be strictly separated".
The macro process serves as the controlling framework of the micro process.
It represents the activities of the entire development team on the scale of
weeks to months at a time. The basic philosophy of the macro process is that
of incremental development: the system as a whole is built up step by step,
each successive version consisting of the previous ones plus a number of new
functions.
Basis of software estimation
The four basic steps in software project estimation are:
1) Estimate the size of the development product. This generally ends up in either Lines of Code (LOC) or Function Points (FP), but there are other possible units of measure. The pros and cons of each are discussed in some of the material referenced at the end of this report.
2) Estimate the effort in person-months or person-hours.
3) Estimate the schedule in calendar months.
4) Estimate the project cost in dollars (or local currency).
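The four steps above can be sketched as a simple pipeline. The productivity rate, team size, and cost per person-month below are hypothetical illustrations, not values from any standard model; real models such as COCOMO derive schedule nonlinearly from effort, whereas this sketch simply divides by team size.

```python
def estimate_project(size_kloc, pm_per_kloc, team_size, cost_per_pm):
    """Size -> effort -> schedule -> cost, with illustrative linear rules."""
    effort_pm = size_kloc * pm_per_kloc        # step 2: effort in person-months
    schedule_months = effort_pm / team_size    # step 3: naive schedule estimate
    cost = effort_pm * cost_per_pm             # step 4: cost in local currency
    return effort_pm, schedule_months, cost

# e.g. a 10 KLOC product at an assumed 2.5 person-months per KLOC,
# 5 developers, and a fully loaded cost of 8000 per person-month:
effort, schedule, cost = estimate_project(10, 2.5, 5, 8000)
```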
Information about past projects
We need to collect performance details about past projects: how big were they? How much effort/time did they need? There will be differences in environmental factors such as the programming languages used and the experience of staff. Data can be collected from the international database maintained by the International Software Benchmarking Standards Group (ISBSG), which contains data from 4800 projects.
Parameters to be Estimated
⚫ Size is a fundamental measure of work
⚫ Based on the estimated size, two parameters are estimated:
◦ Effort and Duration
⚫ Effort is measured in person-months:
◦ One person-month is the effort an individual can typically put in over a month.
⚫ Duration is always measured in months. Work-month (wm) is a popular unit for effort measurement; person-month (pm) is also frequently used to mean the same as work-month.
Measure of effort
⚫ “Cost varies as the product of men and months; progress does not.”
◦ Hence the man-month as a unit for measuring the size of a job is a dangerous and deceptive myth.
⚫ The myth of additional manpower
◦ Brooks’ Law: “Adding manpower to a late software project makes it later”
Mythical Man-Month
Bottom-up estimating
Estimating methods can be generally divided into bottom-up and top-down
approaches. With the bottom-up approach, the estimator breaks the project
into its component tasks and then estimates how much effort will be required
to carry out each task.
With a large project, the process of breaking down into tasks would be a
repetitive one: each task would be analysed into its component sub-tasks and
these in turn would be further analysed. This is repeated until you get to
components that can be executed by a single person in about a week or two.
The reader might wonder why this is not called a top-down approach: after all, you are starting from the top and working down! Although this top-down analysis is an essential precursor to bottom-up estimating, it is really a separate process: that of producing a Work Breakdown Structure (WBS). The bottom-up part comes in adding up the calculated effort for each activity to get an overall estimate.
The bottom-up approach is most appropriate at the later, more detailed,
stages of project planning. If this method is used early on in the project cycle
then the estimator will have to make some assumptions about the
characteristics of the final system, for example the number and size of
software modules. These will be working assumptions that imply no
commitment when it comes to the actual design of the system.
Where a project is completely novel or there is no historical data available, the estimator would be well advised to use the bottom-up approach.
A model to forecast software development effort has two key components. The first is a method of assessing the size of the software development task to be undertaken. The second assesses the rate of work at which the task can be done. For example, system size might be in the form 'thousands of lines of code' (KLOC) and the productivity rate 40 days per KLOC. The values to be used will often be matters of subjective judgement.
For example, Amanda at IOE might estimate that the first software module to be constructed is 2 KLOC. She might then judge that if Kate undertook the development of the code, with her expertise she could work at a rate of 40 days per KLOC and complete the work in 2 x 40 days, that is, 80 days, while Ken, who is less experienced, would need 55 days per KLOC and take 2 x 55, that is, 110 days, to complete the task.
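Amanda's calculation is simply size multiplied by an inverse productivity rate (days per KLOC). A minimal sketch reproducing her figures:

```python
def effort_days(size_kloc, days_per_kloc):
    """Effort = estimated size x days needed per KLOC for a given developer."""
    return size_kloc * days_per_kloc

kate_effort = effort_days(2, 40)   # Kate: 2 KLOC at 40 days/KLOC -> 80 days
ken_effort = effort_days(2, 55)    # Ken: 2 KLOC at 55 days/KLOC -> 110 days
```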
Some parametric models, such as that implied by function points, are focused on system or task size, while others, such as COCOMO, are more concerned with productivity factors.
Having calculated the overall effort required, the problem is then to allocate
proportions of that effort to the various activities within that project.
The top-down and bottom-up approaches are not mutually exclusive. Project
managers will probably try to get a number of different estimates from
different people using different methods. Some parts of an overall estimate
could be derived using a top-down approach while other parts could be
calculated using a bottom-up method.
At the earlier stages of a project, the top-down approach would tend to be
used, while at later stages the bottom-up approach might be preferred.
Expert judgement
⚫ Asking someone who is familiar with and knowledgeable about the
application area and the technologies to provide an estimate
⚫ Particularly appropriate where existing code is to be modified
⚫ Research shows that expert judgement in practice tends to be based on analogy
Estimating by analogy
⚫ It is also called case-based reasoning.
⚫ For a new project the estimator identifies previously completed projects that have similar characteristics to it.
⚫ The new project is referred to as the target project or target case.
⚫ The completed projects are referred to as the source projects or source cases.
⚫ The effort recorded for the matching source case is used as the base estimate for the target project.
⚫ The estimator calculates an estimate for the new project by adjusting the base estimate based on the differences that exist between the two projects.
Example
Assume that cases are matched on the basis of two parameters: the number of inputs and the number of outputs.
• The new project (target case) requires 7 inputs and 15 outputs.
• You are looking into two past cases (source cases) to find a better analogy with the target project:
• Project A: has 8 inputs and 17 outputs.
• Project B: has 5 inputs and 10 outputs.
Which is the closer match for the new project: project A or project B?
Distance between the new project and project A:
√((7 − 8)² + (15 − 17)²) = 2.24
Distance between the new project and project B:
√((7 − 5)² + (15 − 10)²) = 5.39
Project A is the better match because it has a smaller distance to the new project than project B.
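The matching calculation above is ordinary Euclidean distance over the chosen parameters. A small sketch (the function name and variable names are illustrative):

```python
import math

def project_distance(target, source):
    """Euclidean distance between two projects described as parameter tuples."""
    return math.sqrt(sum((t - s) ** 2 for t, s in zip(target, source)))

target = (7, 15)       # (inputs, outputs) of the new project
project_a = (8, 17)
project_b = (5, 10)

# project_a is the nearer source case, so its recorded effort
# would serve as the base estimate for the target project.
```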
The basis for measurement must be found, just as in FPA, in the user requirements the software must fulfil. The result of the measurement must be independent of the development environment and the method used to specify these requirements. Sizes depend only on the user requirements.
From a pure size measurement point of view, the most important improvements of
the COSMIC method compared with using traditional Function Points are as
follows:
The COSMIC method was designed to measure the functional requirements
of software in the domains of business application, real-time and infrastructure
software (e.g. operating systems, web components, etc.), in any layer of a multi-
layer architecture and at any level of decomposition. Traditional Function
Points were designed to measure only the functionality ‘seen’ by human users
Users of the COSMIC method have reported the following benefits, compared with using '1st generation' methods:
• Easy to learn and stable due to the principles-based approach, hence 'future proof' and cost-effective to implement;
• Well accepted by project staff due to the ease of mapping of the method's concepts to modern software requirements documentation methods, and due to its compatibility with modern software architectures;
• Improves estimating accuracy, especially for larger software projects;
• Makes it possible to size requirements automatically that are held in CASE tools;
• Reveals real performance improvement where using traditional function points has not indicated any improvement, due to their inability to recognise how software processes have increased in size over time;
• Sizing with COSMIC is an excellent way of controlling the quality of the requirements at all stages as they evolve.
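In the COSMIC method, functional size is obtained by counting data movements: each Entry, Exit, Read, or Write in a functional process contributes one COSMIC Function Point (CFP). A minimal sketch of that counting rule (the function and the single-letter labels are illustrative, not part of any official tooling):

```python
def cosmic_size(movements):
    """Count data movements: each Entry (E), Exit (X), Read (R) or
    Write (W) in a functional process contributes 1 CFP."""
    valid = {"E", "X", "R", "W"}
    if not all(m in valid for m in movements):
        raise ValueError("unknown data-movement type")
    return len(movements)

# e.g. a functional process with one Entry, two Reads and one Exit:
size_cfp = cosmic_size(["E", "R", "R", "X"])   # 4 CFP
```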
COCOMO a parametric model
COCOMO (Constructive Cost Estimation Model) was proposed by Boehm.
According to him, any software development project can be classified into one of
the following three categories based on the development complexity: organic,
semidetached, and embedded. The classification is done considering the
characteristics of the product as well as those of the development team and
development environment. Usually these three product classes correspond to
application, utility and system programs, respectively. Data processing programs
are normally considered to be application programs. Compilers, linkers, etc., are
utility programs. Operating systems and real-time system programs, etc. are
system programs.
The definition of organic, semidetached, and embedded systems are elaborated
below.
Organic: A development project can be considered of organic type, if the pro-
ject deals with developing a well understood application program, the size of
the development team is reasonably small, and the team members are experi-
enced in developing similar types of projects.
Semidetached: A development project can be considered of semidetached
type, if the development consists of a mixture of experienced and inexperi-
enced staff. Team members may have limited experience on related systems
but may be unfamiliar with some aspects of the system being developed.
Embedded: A development project is considered to be of embedded type if the software being developed is strongly coupled to complex hardware, or if stringent regulations on the operational procedures exist.
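The three categories map to different constants in Boehm's basic COCOMO effort equation, Effort = a × (KLOC)^b person-months. The (a, b) pairs below are the widely published basic COCOMO values; treat the sketch as illustrative rather than a complete implementation, since it omits the intermediate model's cost drivers:

```python
# Boehm's basic COCOMO coefficients (a, b) for Effort = a * KLOC ** b
COCOMO_COEFFS = {
    "organic":      (2.4, 1.05),
    "semidetached": (3.0, 1.12),
    "embedded":     (3.6, 1.20),
}

def basic_cocomo_effort(kloc, mode):
    """Estimated effort in person-months for a project of a given size and mode."""
    a, b = COCOMO_COEFFS[mode]
    return a * kloc ** b
```

For the same size, an embedded project is estimated to need noticeably more effort than an organic one, reflecting its tighter hardware and regulatory constraints.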
Estimates are required at different stages in the system life cycle and COCOMO II
has been designed to accommodate this by having models for three different
stages.
Application composition
Where the external features of the system that the users will experience are de-
signed. Prototyping will typically be employed to do this. With small applications
that can be built using high-productivity application-building tools, development
can stop at this point.
Early design
Where the fundamental software structures are designed. With larger, more de-
manding systems, where, for example, there will be large volumes of transactions
and performance is important, careful attention will need to be paid to the architec-
ture to be adopted.
Post architecture
Where the software structures undergo final construction, modification and tuning
to create a system that will perform as required.
To estimate the effort for application composition, the counting of object points, which were described earlier, is recommended by the developers of COCOMO II. At the early design stage, FPs are recommended as the way of gauging a basic system size. An FP count might be converted to an SLOC equivalent by multiplying the FPs by a factor for the programming language that is to be used.