
Unit II Project Life Cycle and Effort Estimation: IT8075 Software Project Management, Department of IT, 2021-2022


IT8075- Software Project Management Department of IT 2021-2022

UNIT II PROJECT LIFE CYCLE AND EFFORT ESTIMATION

Software process and Process Models – Choice of Process models – Rapid Application development – Agile methods – Dynamic System Development Method – Extreme Programming – Managing interactive processes – Basics of Software estimation – Effort and Cost estimation techniques – COSMIC Full function points – COCOMO II – a Parametric Productivity Model.

Software Process And Process Models


A software process is a set of activities that leads to the production of a software
product. These activities may involve the development of software from scratch in
a standard programming language like Java or C. Increasingly, however, new
software is developed by extending and modifying existing systems and by
configuring and integrating off-the-shelf software or system components.

Software processes are complex and, like all intellectual and creative
processes, rely on people making decisions and judgements. Because of the need
for judgement and creativity, attempts to automate software processes have met
with limited success. Computer-aided software engineering (CASE) tools can
support some process activities. However, there is no possibility, at least in the
next few years, of more extensive automation where software takes over creative
design from the engineers involved in the software process.

Although there are many software processes, some fundamental activities are
common to all software processes:
1. Software specification: the functionality of the software and constraints on its operation must be defined.
2. Software design and implementation: the software to meet the specification must be produced.
3. Software validation: the software must be validated to ensure that it does what the customer wants.
4. Software evolution: the software must evolve to meet changing customer needs.
Choice of process models
The word "process" is sometimes used to emphasize the idea of a system in action. In order to achieve an outcome, the system will have to execute one or more activities: this is its process. This idea can be applied to the development of computer-based systems where a number of interrelated activities have to be undertaken to create a final product. These activities can be organized in different ways and we can call these process models.
A major part of the planning will be the choosing of the development methods to be used and the slotting of these into an overall process model.
St. Joseph’s College of Engineering 1
The planner needs not only to select methods but also to specify how the method is to be applied. With methods such as SSADM, there is a considerable degree of choice about how it is to be applied: not all parts of SSADM are compulsory. Many student projects have the rather basic failing that at the planning stage they claim that, say, SSADM is to be used; in the event, all that is produced are a few SSADM fragments such as a top-level data flow diagram and a preliminary logical data structure diagram. If this is all the particular project requires, it should be stated at the outset.
Software process models
The waterfall model
 Plan-driven model. Separate and distinct phases of specification and
development.
Incremental development
 Specification, development and validation are interleaved. May be plan-driven
or agile.
Reuse-oriented software engineering
 The system is assembled from existing components. May be plan-driven or agile.
In practice, most large systems are developed using a process that incorporates
elements from all of these models.
The waterfall model

Fundamental development activities:


1. Requirements analysis and definition: the system's services, constraints and
goals are established by consultation with system users. They are then defined in
detail and serve as a system specification.
2. System and software design: the systems design process partitions the
requirements to either hardware or software systems. It establishes overall system
architecture. Software design involves identifying and describing the fundamental
software system abstractions and their relationships.
3. Implementation and unit testing: during this stage, the software design is realised
as a set of programs or program units. Unit testing involves verifying that each
unit meets its specification.
4. Integration and system testing: the individual program units or programs are
integrated and tested as a complete system to ensure that the software
requirements have been met. After testing, the software system is delivered to the
customer.
5. Operation and maintenance: normally (although not necessarily) this is the
longest life-cycle phase. The system is installed and put into practical use.
Maintenance involves correcting errors which were not discovered in earlier
stages of the life cycle, improving the implementation of system units and
enhancing the system's services as new requirements are discovered.
In principle, the result of each phase is one or more documents that are approved ('signed off'). The following phase should not start until the previous phase has finished. In practice, these stages overlap and feed information to each other. During design, problems with requirements are identified; during coding, design problems are found, and so on. The software process is not a simple linear model but involves a sequence of iterations of the development activities. The waterfall model is a traditional engineering approach applied to software engineering.
A strict waterfall approach discourages revisiting and revising any prior
phase once it is complete. This “inflexibility” in a pure waterfall model has been a
source of criticism by supporters of other more “flexible” models. It has been
widely blamed for several large-scale government projects running over budget,
over time and sometimes failing to deliver on requirements due to the Big Design
Up Front approach. Except when contractually required, the waterfall model has
been largely superseded by more flexible and versatile methodologies developed
specifically for software development.
 V model approach is an extension of waterfall where different testing phases are
identified which check the quality of different development phases
 For each development stage there is a testing stage
 The testing associated with different stages serves different purposes, e.g. system
testing checks that components work together correctly, while user acceptance
testing checks that users can use the system to carry out their work
Verification versus Validation
 Verification is the process of determining:
 Whether output of one phase of development conforms to its previous
phase.
 Validation is the process of determining:
 Whether a fully developed system conforms to its SRS document.
 Verification is concerned with phase containment of errors:
 Whereas the aim of validation is that the final product be error free.

Spiral Model
In 1988, Barry Boehm published a formal software system development "spiral
model," which combines some key aspects of the waterfall model and rapid
prototyping methodologies, in an effort to combine advantages of top-down and
bottom-up concepts. It placed emphasis on a key area many felt had been
neglected by other methodologies: deliberate iterative risk analysis, particularly
suited to large-scale complex systems.

The basic principles are:


 Focus is on risk assessment and on minimizing project risk by breaking a
project into smaller segments and providing more ease-of-change during the
development process, as well as providing the opportunity to evaluate risks

and weigh consideration of project continuation throughout the life cycle.
 “Each cycle involves a progression through the same sequence of steps, for
each part of the product and for each of its levels of elaboration, from an
overall concept-of-operation document down to the coding of each
individual program.”
 Each trip around the spiral traverses four basic quadrants:
(1) determine objectives, alternatives, and constraints of the iteration;
(2) evaluate alternatives; Identify and resolve risks;
(3) develop and verify deliverables from the iteration; and
(4) plan the next iteration.
 Begin each cycle with an identification of stakeholders and their “win
conditions”, and end each cycle with review and commitment.
Software Prototyping
A prototype is a working model of one or more aspects of the projected system. It
is constructed and tested quickly and inexpensively in order to test out
assumptions.
Prototypes can be classified as throw-away, evolutionary or incremental.
 Throw-away prototypes
Here the prototype is used only to test out some ideas and is then discarded
when the development of the operational system is commenced. The
prototype could be developed using a different software environment (for
example, a 4GL as opposed to a 3GL for the final system where machine
efficiency is important) or even on a different kind of hardware platform.
 Evolutionary prototypes
The prototype is developed and modified until it is finally in a state where it
can become the operational system. In this case, the standards that are used
to develop the software have to be carefully considered.
 Incremental prototypes
It could be argued that this is, strictly speaking, not prototyping. The operational system is developed and implemented in small stages so that the feedback from the earlier stages can influence the development of the later stages.
The most important justification for a prototype is the need to reduce uncertainty by conducting an experiment.
Some of the reasons that have been put forward for prototyping are the following.
 Learning by doing: when we have just done something for the first time, we can usually look back and see where we made mistakes.
 Improved communication: users are often reluctant to read the massive documents produced by structured methods. Even if they do read this documentation, they do not get a feel for how the system is likely to work in practice.
 Improved user involvement: the users may be more actively involved in design decisions about the new system.
 Clarification of partially-known requirements: where there is no existing system to mimic, users can often get a better idea of what might be useful to them in a potential system by trying out prototypes.
 Demonstration of the consistency and completeness of a specification: any mechanism that attempts to implement a specification on a computer is likely to uncover ambiguities and omissions.
 Reduced need for documentation: because a working prototype can be examined, there is less need for detailed documentation. Some may argue, however, that this is a very dangerous suggestion.
 Reduced maintenance costs (that is, changes after the system goes live): if the user is unable to suggest modifications at the prototyping stage, the chances are that they will ask for the changes as modifications to the operational system. This reduction of maintenance costs is the main plank in the financial case for creating prototypes.
 Feature constraint: if an application building tool is used, then the prototype will tend to have features that are easily implemented by that tool. A paper-based design might suggest features that are expensive to implement.
 Production of expected results: the problem with creating test runs is generally not the creation of the test data but the accurate calculation of the expected results. A prototype can be of assistance here.

Software prototyping is not without its drawbacks and dangers, however.


 Users sometimes misunderstand the role of the prototype: for example, they might expect the prototype to be as robust as an operational system when incorrect data is entered, or they might expect the prototype to have as fast a response as the operational system, although this was not the intention.
 Lack of project standards possible: evolutionary prototyping could just be an excuse for a sloppy 'hack it out and see what happens' approach.
 Lack of control: it is sometimes difficult to control the prototyping cycle as the driving force could be the users' propensity to try out new things.
 Additional expense: building and exercising a prototype will incur additional expenses. The additional expense might be less than expected, however, because many analysis and design tasks would have to be undertaken anyway. Some research suggests that typically there is a 10% extra cost to produce a prototype.
 Machine efficiency: a system built through prototyping, while sensitive to the users' needs, might not be as efficient in machine terms as one developed using more conventional methods.
 Close proximity of developers: prototyping often means that code developers have to be sited close to the users. One trend has been for organizations in developed countries to get program coding done cheaply in Third World countries such as India. Prototyping generally will not allow this.
Incremental delivery
This approach involves breaking the system down into small components which are then implemented and delivered in sequence. Each component that is delivered actually gives some benefit to the user.

Advantages of this approach


These are some of the justifications given for the approach:
 the feedback from early increments can influence the later stages;
 the possibility of changes in requirements is not so great as with large
monolithic projects because of the shorter timespan between the design of a
component and its delivery;
 users get benefits earlier than with a conventional approach;
 early delivery of some useful components improves cash flow, because you
get some return on investment early on;
 smaller sub-projects are easier to control and manage;
 'gold-plating', the requesting of features that are unnecessary and not in fact
used, should be less as users will know that they get more than one
opportunity to make their requirements known: if a feature is not in the
current increment then it can be included in the next;
 the project can be temporarily abandoned if more urgent work crops up;
 job satisfaction is increased for developers who see their labours bearing fruit
at regular, short, intervals.
Disadvantages
On the other hand these disadvantages have been put forward:
 'software breakage', that is, later increments might require the earlier
increments to be modified;
 Developers might be more productive working on one large system than on a
series of smaller ones.
The incremental delivery plan
The content of each increment and the order in which the increments are to be
delivered to the users of the system have to be planned at the outset.
Basically the same process has to be undertaken as in strategic planning but at a
more detailed level where the attention is given to increments of a user application
rather than whole applications. The elements of the incremental plan are the
system objectives, incremental plan and the open technology plan.
Identify system objectives
The purpose is to give an idea of the 'big picture', that is, the overall objectives
that the system is to achieve. These can then be expanded into more specific
functional goals and quality goals. Functional goals will include:
 objectives it is intended to achieve;
 computer/non-computer functions to achieve them.
In addition, measurable quality characteristics should be defined, such as
reliability, response and security. This reflects concern that system developers
always keep sight of the objectives that they are trying to achieve on behalf of their
clients. In the quickly changing environment of an application, individual
requirements may change over the course of the project, but the objectives should
not.
Plan increments
Having defined the overall objectives, the next stage is to plan the increments using
the following guidelines:
 steps typically should consist of 1% to 5% of the total project;
 non-computer steps may be included - but these must deliver benefits
directly to the users;
 ideally, an increment should take one month or less and should never take
more than three months;
 each increment should deliver some benefit to the user;
 some increments will be physically dependent on others;
 value-to-cost ratios may be used to decide priorities (see below).
Very often a new system will be replacing an old computer system and the first
increments may use parts of the old system. For example, the data for the database
of the new system may initially be obtained from the old system's standing files.
Which steps should be first? Some steps might be prerequisites because of physical
dependencies but others can be in any order. Value-to-cost ratios can be used to
establish the order in which increments are to be developed. The customer is asked
to rate each increment with a score in the range 1-10 in terms of its value. The
developers also rate the cost of developing each of the increments with a score in
the range 0-10. This might seem a rather crude way of evaluating costs and
benefits, but people are often unwilling to be more precise. By then dividing the
value rating by the cost rating, a rating which indicates the relative 'value for
money' of each increment is derived.
The value to cost ratio = V/C
where V is a score 1-10 representing value to customer and
C is a score 0-10 representing cost.
V/C ratios: an example
Steps              Value  Cost  Ratio  Rank
profit reports       9      1    9      2nd
online database      1      9    0.11   5th
ad hoc enquiry       5      5    1      4th
purchasing plans     9      4    2.25   3rd
profit-based pay     9      0    ∞      1st
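The ranking in this table can be reproduced in a few lines of Python (step names and scores are taken from the example above; a cost score of 0 is treated as an infinite ratio):

```python
# Rank increments by value-to-cost ratio, highest ratio first.
steps = {
    "profit reports": (9, 1),
    "online database": (1, 9),
    "ad hoc enquiry": (5, 5),
    "purchasing plans": (9, 4),
    "profit-based pay": (9, 0),
}

def vc_ratio(value, cost):
    # A zero cost score gives an infinite ratio: deliver that step first.
    return float("inf") if cost == 0 else value / cost

ranked = sorted(steps, key=lambda s: vc_ratio(*steps[s]), reverse=True)
for rank, step in enumerate(ranked, start=1):
    print(f"{rank}. {step} (V/C = {vc_ratio(*steps[step]):.2f})")
```

This reproduces the ranks shown in the table: profit-based pay first, online database last.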


Create open technology plan
If the system is to be able to cope with new components being continually added
then it has to be built so that it is extendible, portable and maintainable.
As a minimum this will require the use of:
 a standard high level language;
 a standard operating system;
 variable parameters, for example, items such as organization name,
department names and charge rates are held in a parameter file that can be
amended without programmer intervention;
 a standard database management system.
These are all things that might be expected as a matter of course in a modern IS
development environment.
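The 'variable parameters' point above can be sketched as follows; the parameter names, values and the embedded JSON (which stands in for a parameter file on disk) are all hypothetical:

```python
import json

# Organization-specific values live in a parameter file, not in code,
# so they can be amended without programmer intervention.
PARAMS = json.loads("""
{
    "organization_name": "ExampleCo",
    "department_names": ["Sales", "Accounts"],
    "charge_rate_per_hour": 45.0
}
""")

def invoice_line(hours):
    # Programs read the parameters rather than hard-coding them.
    rate = PARAMS["charge_rate_per_hour"]
    return f"{PARAMS['organization_name']}: {hours} h @ {rate} = {hours * rate}"

print(invoice_line(10))
```

Changing the charge rate then means editing the parameter file, not redeploying code.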
Rapid Application development
 Rapid Application Development model relies on prototyping and rapid cycles
of iterative development to speed up development and elicit early feedback
from business users. RAD focuses on gathering customer requirements through
workshops or focus groups, early testing of the prototypes by the customer
using iterative concept, reuse of the existing prototypes (components),
continuous integration and rapid delivery.
 After each iteration, developers can refine and validate the features with
stakeholders. Development of each module involves the various basic steps as
in the waterfall model, i.e. analyzing, designing, coding and then testing, etc.
 Another striking feature of this model is its short time span, i.e. the time frame
for delivery (time-box) is generally 60-90 days.
o RAD model is also characterized by reiterative user testing and the re-use
of software components. Hence, RAD has been instrumental in reducing
the friction points in delivering successful enterprise applications.
o WaveMaker makes use of the RAD model to provide a Rapid Application
Development platform to create web and mobile applications. The
following diagram depicts WaveMaker RAD platform architecture, based
on the MVC (Model-View-Controller) pattern. Open standards, easy
customization and rapid prototyping are central to the platform.


RAD Model Design


RAD model distributes the analysis, design, build and test phases into a series of
short, iterative development cycles.
Following are the various phases of the RAD Model −
Business Modelling
The business model for the product under development is designed in terms of
flow of information and the distribution of information between various business
channels. A complete business analysis is performed to find the vital information
for the business, how it can be obtained, how and when the information is
processed, and what factors drive the successful flow of information.
Data Modelling
The information gathered in the Business Modelling phase is reviewed and
analyzed to form sets of data objects vital for the business. The attributes of all
data sets are identified and defined. The relations between these data objects are
established and defined in detail in relevance to the business model.
Process Modelling
The data object sets defined in the Data Modelling phase are converted to establish
the business information flow needed to achieve specific business objectives as per
the business model. The process model for any changes or enhancements to the
data object sets is defined in this phase. Process descriptions for adding, deleting,
retrieving or modifying a data object are given.
Application Generation
The actual system is built and coding is done by using automation tools to convert
process and data models into actual prototypes.
Testing and Turnover
The overall testing time is reduced in the RAD model as the prototypes are
independently tested during every iteration. However, the data flow and the
interfaces between all the components need to be thoroughly tested with complete
test coverage. Since most of the programming components have already been
tested, it reduces the risk of any major issues.
The following illustration describes the RAD Model in detail.


Benefits
1. Align business and IT to overcome rework weariness
2. Enable enterprises to build the ideal innovation pyramid
3. Respond to transforming technologies and expectations
When to use RAD Methodology?
• When a system needs to be produced in a short span of time (2-3 months)
• When the requirements are known
• When the user will be involved all through the life cycle
• When technical risk is less
• When there is a necessity to create a system that can be modularized in 2-3
months of time
• When the budget is high enough to afford both designers for modelling and
automated tools for code generation
Agile methods
Agile software development methodology is a process for developing
software (like other software development methodologies – Waterfall model, V-
Model, Iterative model etc.) However, Agile methodology differs significantly
from other methodologies. In English, Agile means ‘ability to move quickly and
easily’ and responding swiftly to change – this is a key aspect of Agile software
development as well.
Brief overview of Agile Methodology
 In traditional software development methodologies like Waterfall model, a
project can take several months or years to complete and the customer may
not get to see the end product until the completion of the project.
 At a high level, non-Agile projects allocate extensive periods of time for
Requirements gathering, design, development, testing and UAT, before finally
deploying the project.
 In contrast to this, Agile projects have Sprints or iterations which are shorter
in duration (Sprints/iterations can vary from 2 weeks to 2 months) during
which pre-determined features are developed and delivered.
 Agile projects can have one or more iterations and deliver the complete
product at the end of the final iteration.

One of the major difficulties faced while using the traditional heavy-weight
methodologies is that these require the customers to come up with all the
requirements up front. This is not achievable in most projects. Also, the traditional
heavy weight processes were too rigid and it became difficult to use these in
projects involving significant reuse and modifications to the existing code. These
issues are elaborated in the following.

Difficult to accommodate change requests: The heavyweight processes are based on making a long-term project management plan and comprehensive design for the project based on the requirements specified upfront. Therefore, any change to the requirements afterwards requires making changes to the various plans and designs. However, in modern software development projects change requests from customers have become common during execution of the project. Often, as much as 50% of requirements are either new or modified after the project has started. Needless to say, frequent requirement changes lead to a lot of rework and render traditional heavy-weight processes inefficient.

Documentation-driven: A disadvantage of the traditional heavyweight methodologies is that these mandate production of voluminous documents, even though these may be rarely referred to by anyone later. Usually a lot of effort is invested in preparing the documentation in the heavyweight development methodologies. According to some estimates, as much as 50% of the development effort is invested in preparing the documentation.

Too rigid: The traditional heavyweight processes worked fine when software was being developed from scratch. However, as we have noted in Chapter 1, the project characteristics have changed drastically over the last couple of decades. Now significant code reuse is made during software development and the proportion of new code that is written is often as low as 10% of the total code of the software being developed. Examples of such projects are customization projects. Due to the rigidity of the traditional processes it becomes difficult to tailor these for efficient development of modern projects requiring a significant amount of code reuse.

The following principles are central to the agile methods.

 Incremental delivery after each time box: In agile methods, the required features are decomposed into many small parts that can be incrementally developed. The agile model adopts an iterative approach, in the sense that each incremental part is developed over an iteration. At a time, only one increment is planned, developed, and then deployed at the customer site. No long-term plans are made. The time to complete an iteration is called a time box. The implication of the term 'time box' is that the end date for an iteration does not change. The development team can, however, decide to reduce the delivered functionality during a time box if necessary, but the delivery date is considered sacrosanct. During an iteration one or more features are analyzed, designed, coded, and tested. Each iteration is intended to be small and easily manageable and lasts for a couple of weeks only. For large projects, multiple iterations may be progressing simultaneously that may be focusing on parallel development of different sets of features.
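A minimal sketch of the time-box rule described above, using hypothetical feature names, priorities and effort estimates: the delivery date is fixed, so features that do not fit the iteration's capacity are deferred rather than slipping the date:

```python
# Sketch of time-box scoping: the end date is sacrosanct, so when the
# estimated effort exceeds capacity, lower-priority features are deferred
# to a later iteration instead of moving the delivery date.
features = [  # (name, priority: lower = more important, effort in days)
    ("login", 1, 5),
    ("search", 2, 8),
    ("reports", 3, 6),
    ("export", 4, 4),
]
capacity_days = 15  # one time-boxed iteration

def plan_timebox(features, capacity):
    planned, deferred, used = [], [], 0
    for name, _priority, effort in sorted(features, key=lambda f: f[1]):
        if used + effort <= capacity:
            planned.append(name)
            used += effort
        else:
            deferred.append(name)
    return planned, deferred

planned, deferred = plan_timebox(features, capacity_days)
print("this iteration:", planned, "| deferred:", deferred)
```

Here 'login' and 'search' fit the 15-day box; 'reports' and 'export' move to the next iteration.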

 Face-to-face communication: Agile model emphasizes face-to-face communication over written documents. In order to facilitate face-to-face communication, the team size is deliberately kept small (5-9 people) and the team members share the same office space. This makes the agile model well-suited to be used in small projects. However, large projects can also be executed using the agile model through suitable adaptations of the model. In a large project, it is likely that the collaborating teams might work at different locations. In this case, the different teams need to maintain daily contact through various communication channels such as video conferencing, telephone, and e-mail.

 Customer interactions: In order to foster close interactions with the customer, an agile project usually includes a few customer representatives in the team. Customer participation in the project is further enhanced by inviting the customer representatives along with the stakeholders at the end of each iteration to review the progress made, re-evaluate the requirements, and give necessary feedback to the development team.

 Minimal documentation: Agile documents are created on a need-to-know basis. That is, a piece of information is documented only if it is expected to be referred to by some stakeholders or is needed for communicating with some external group. Also, agile documents describe information that is unlikely to change. Consequently, the documentation is light, and it not only saves a significant amount of documentation preparation effort, it also becomes easy to keep these up to date and consistent.

 Pair programming: Agile development projects usually deploy pair programming. In this approach, two programmers work together at one work station. One of the programmers types in the code, while the other reviews the code as it is typed in. The two programmers switch their roles every hour or so. Several studies indicate that programmers working in pairs produce compact well-written code and commit fewer errors as compared to programmers working individually.

Example of Agile software development


Example: Google is working on project to come up with a competing product for
MS Word, that provides all the features provided by MS Word and any other
features requested by the marketing team. The final product needs to be ready in
10 months of time. Let us see how this project is executed in traditional and Agile
methodologies.
In traditional Waterfall model –
 At a high level, the project teams would spend 15% of their time on gathering
requirements and analysis (1.5 months)
 20% of their time on design (2 months)
 40% on coding (4 months) and unit testing
 20% on System and Integration testing (2 months).
 At the end of this cycle, the project may also have 2 weeks of User Acceptance testing by marketing teams.
 In this approach, the customer does not get to see the end product until the end of the project, when it becomes too late to make significant changes.
(Figure: project schedule in traditional software development.)
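The phase durations in this example follow directly from the percentages; a quick check in Python:

```python
# Split the 10-month waterfall schedule by the percentages quoted above.
total_months = 10
phases = {
    "requirements gathering and analysis": 0.15,
    "design": 0.20,
    "coding and unit testing": 0.40,
    "system and integration testing": 0.20,
}
durations = {phase: share * total_months for phase, share in phases.items()}
for phase, months in durations.items():
    print(f"{phase}: {months:g} months")
```

The four phases account for 95% of the schedule; the remaining half month absorbs the two weeks of user acceptance testing.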

With Agile development methodology –


 In the Agile methodology, each project is broken up into several ‘Iterations’.
 All Iterations should be of the same time duration (between 2 to 8 weeks).

 At the end of each iteration, a working product should be delivered.
 In simple terms, in the Agile approach the project will be broken up into 10
releases (assuming each iteration is set to last 4 weeks).
 Rather than spending 1.5 months on requirements gathering, in Agile software
development, the team will decide the basic core features that are required in
the product and decide which of these features can be developed in the first
iteration.
 Any remaining features that cannot be delivered in the first iteration will be
taken up in the next iteration or subsequent iterations, based on priority.
 At the end of the first iteration, the team will deliver working software
with the features that were finalized for that iteration. There will be 10
iterations, and at the end of each iteration the customer is delivered working
software that is incrementally enhanced and updated with the features that
were shortlisted for that iteration.
The Agile Manifesto values underlying this approach are:
1. Individuals and interactions over processes and tools
2. Working software over comprehensive documentation
3. Customer collaboration over contract negotiation
4. Responding to change over following a plan
Dynamic Systems Development Method (DSDM)/Atern
Dynamic Systems Development Method is an iterative approach to software
development but adds additional discipline and structure to the process. Central to
DSDM is the principle that “any project must be aligned to clearly defined strategic
goals and focus upon early delivery of real benefits to the business.”
DSDM is structured around eight key principles
 Focus on the Business Need: DSDM teams must establish a valid business
case and ensure organizational support throughout the project
 Deliver on Time: Work should be time-boxed and predictable, to build con-
fidence in the development team.
 Collaborate: DSDM teams must involve stakeholders throughout the project
and empower all members of the team to make decisions.
 Never Compromise Quality: to ensure high quality, the level of quality should be agreed with
the business at the start of the project. This is enforced through continuous
testing, review, and documentation.
 Build Incrementally from Firm Foundations: Teams must do Enough
Design Work Up Front (EDUF) to ensure they know exactly what to build,
but not too much to slow development.
 Develop Iteratively: Take feedback from the business and use this to
continually improve with each development iteration. Teams must also
recognize that details emerge as the project or product develops and they
must respond to this.
 Communicate Continuously and Clearly: Holding daily stand-up sessions,
encouraging informal communication, running workshops and building pro-
totypes are all key DSDM tools. Communicating through documents is dis-
couraged - instead, documentation must be lean and timely.
 Demonstrate Control: The Project Manager and Team Leader should make

their plans and progress visible to all and focus on successful delivery.
Atern/DSDM framework
The feasibility/business study stage will not only look at the business feasibility of
the proposed project, but also at whether DSDM would be the best framework for
it. Applications with a prominent user interface would be prime candidates.

Phase: Key Responsibilities

Pre-project: Initiation of the project, agreeing the Terms of Reference for the work.

Feasibility: Typically a short phase to assess the viability and the outline business case (justification).

Foundations: Key phase for ensuring the project is understood and defined well enough so that the scope can be baselined at a high level and the technology components and standards agreed, before the development activity begins.

Exploration: Iterative development phase during which teams expand on the high-level requirements to demonstrate the functionality.

Engineering: Iterative development phase where the solution is engineered to be deployable for release.

Deployment: For each Increment (set of timeboxes) of the project the solution is made available.

Post-project: Assesses the accrued benefits.

Core techniques
 Timeboxing: the approach of completing the project incrementally by
splitting it into portions, each with a fixed budget and a delivery date. For
each portion a number of requirements are prioritised and selected. Because
time and budget are fixed, the only remaining variable is the set of
requirements, so if a project is running out of time or money the
requirements with the lowest priority are omitted. This does not mean that
an unfinished product is delivered: by the Pareto Principle, 80% of the
project's value comes from 20% of the system requirements, so as long as the
most important 20% of the requirements are implemented the system still
meets the business need. No system is built perfectly in the first try.
 MoSCoW: is a technique for prioritising work items or requirements. It is an
acronym that stands for:
 MUST have
 SHOULD have
 COULD have
 WON'T have
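Timeboxing and MoSCoW work together: requirements are taken in priority order until the timebox's fixed effort budget is exhausted. A minimal sketch of this idea (the requirement names and effort figures are hypothetical):

```python
# Select requirements for a timebox in MoSCoW priority order.
PRIORITY_ORDER = {"MUST": 0, "SHOULD": 1, "COULD": 2, "WONT": 3}

def fill_timebox(requirements, budget_days):
    """Pick requirements by priority until the fixed effort budget is used up."""
    selected = []
    for name, prio, effort in sorted(requirements,
                                     key=lambda r: PRIORITY_ORDER[r[1]]):
        if prio == "WONT":
            continue  # explicitly out of scope for this timebox
        if effort <= budget_days:
            selected.append(name)
            budget_days -= effort
    return selected

# Hypothetical requirements: (name, priority, effort in person-days).
reqs = [("login", "MUST", 5), ("reports", "COULD", 8),
        ("search", "SHOULD", 6), ("themes", "WONT", 3)]
print(fill_timebox(reqs, 12))  # → ['login', 'search']
```

With a 12-day budget the lowest-priority requirement ("reports") is dropped, yet a working product containing the most important features is still delivered.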

Extreme Programming
The Extreme Programming (XP) technique is very helpful when there are constantly
changing demands or requirements from the customers, or when they are not sure
about the functionality of the system. It advocates frequent "releases" of the
product in short development cycles, which inherently improves productivity
and introduces checkpoints where new customer requirements can easily be
incorporated. XP develops software keeping the customer at the centre.
5 values
• Communication: Everyone on a team works jointly at every stage of the project.
• Simplicity: Developers strive to write simple code that brings more value to a
product, as it saves time and effort.

• Feedback: Team members deliver software frequently, get feedback about it, and
improve a product according to the new requirements.
• Respect: Every person assigned to a project contributes to a common goal.
• Courage: Programmers objectively evaluate their own results without making
excuses and are always ready to respond to changes.
These values represent a specific mindset of motivated team players who do their
best on the way to achieving a common goal. XP principles derive from these
values and reflect them in more concrete ways.

5 XP principles
• Rapid feedback:
Team members understand the given feedback and react to it right away.
• Assumed simplicity:
Developers need to focus on the job that is important at the moment and follow
YAGNI (You Ain’t Gonna Need It) and DRY (Don’t Repeat Yourself) principles.
• Incremental changes:
Small changes made to a product step by step work better than big ones made at
once.
• Embracing change:
If a client thinks a product needs to be changed, programmers should support this
decision and plan how to implement new requirements.
• Quality work:
A team that works well makes a valuable product and feels proud of it.

XP suggests using 12 practices while developing software. As XP is defined by
values and principles, its practices also represent them and can be clustered into
four groups.
 Test-Driven Development
Is it possible to write clear code quickly? The answer is yes, according to XP
practitioners. The quality of software derives from short development cycles that,
in turn, allow for receiving frequent feedback. And valuable feedback comes from
good testing. XP teams practice the test-driven development (TDD) technique, which

entails writing an automated unit test before the code itself. According to this
approach, every piece of code must pass its test to be released, so software
engineers focus on writing code able to accomplish the needed function. That is
how TDD allows programmers to use immediate feedback to produce reliable
software.
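A minimal sketch of the test-first cycle in Python's built-in unittest framework (the `add` function is a hypothetical example, not part of any XP tooling):

```python
import unittest

# Step 1: write the test before the production code (it fails at first).
class TestAdd(unittest.TestCase):
    def test_add_two_numbers(self):
        self.assertEqual(add(2, 3), 5)

# Step 2: write just enough code to make the test pass.
def add(a, b):
    return a + b

# Step 3: run the suite; every piece of code must pass before release.
result = unittest.TextTestRunner().run(
    unittest.defaultTestLoader.loadTestsFromTestCase(TestAdd))
print(result.wasSuccessful())  # → True
```

The red-green rhythm shown here is what gives XP teams the immediate feedback described above.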
 The Planning Game
This is a meeting that occurs at the beginning of an iteration cycle. The
development team and the customer get together to discuss and approve a
product’s features. At the end of the planning game, developers plan for the
upcoming iteration and release, assigning tasks for each of them.
 On-site Customer
According to XP, the end customer should fully participate in development. The
customer should be present all the time to answer team questions, set priorities,
and resolve disputes, if necessary.
 Pair Programming
This practice requires two programmers to work jointly on the same code. While
the first developer focuses on writing, the other one reviews code, suggests
improvements, and fixes mistakes along the way. Such teamwork results in high-
quality software and faster knowledge sharing, but takes 15 to 60 percent more
time. In this regard, it is more reasonable to try pair programming on long-term
projects.
 Code Refactoring
To deliver business value with well-designed software in every short iteration, XP
teams also use refactoring. The goal of this technique is to continuously improve
code. Refactoring is about removing redundancy, eliminating unnecessary
functions, increasing code coherency, and at the same time decoupling
elements. "Keep your code clean and simple, so you can easily understand and
modify it when required" would be the advice of any XP team member.
 Continuous Integration
Developers always keep the system fully integrated. XP teams take iterative
development to another level because they commit code multiple times a
day. XP practitioners understand the
importance of communication. Programmers discuss which parts of the code can
be re-used or shared. This way, they know exactly what functionality they need to
develop. The policy of shared code helps eliminate integration problems. In
addition, automated testing allows developers to detect and fix errors early, before
deployment.
 Small Releases
This practice suggests releasing the first version quickly and further developing the
product by making small and incremental updates. Small releases allow
developers to frequently receive feedback, detect bugs early, and monitor how the
product works in production. One of the methods of doing so is the continuous
integration practice (CI) we mentioned before.
 Simple Design

The best design for software is the simplest one that works. If any complexity is
found, it should be removed. The right design should pass all tests, have no
duplicate code, and contain the fewest possible methods and classes. It should also
clearly reflect the programmer’s intent.
XP practitioners highlight that chances to simplify design are higher after the
product has been in production for some time. Don Wells advises writing code for
those features you plan to implement right away rather than writing it in advance
for other future features: “The best approach is to create code only for the features
you are implementing while you search for enough knowledge to reveal the
simplest design. Then refactor incrementally to implement your new
understanding and design.”
 Coding Standards
A team must have common sets of coding practices, using the same formats and
styles for code writing. Application of standards allows all team members to read,
share, and refactor code with ease, track who worked on certain pieces of code, as
well as make the learning faster for other programmers. Code written according to
the same rules encourages collective ownership.
 Collective Code Ownership
This practice declares a whole team’s responsibility for the design of a system. Each
team member can review and update code. Developers that have access to code
won’t get into a situation in which they don’t know the right place to add a new
feature. The practice helps avoid code duplication. The implementation of
collective code ownership encourages the team to cooperate more and feel free to
bring new ideas.
 System Metaphor
System metaphor stands for a simple design that has a set of certain qualities. First,
a design and its structure must be understandable to new people. They should be
able to start working on it without spending too much time examining
specifications. Second, the naming of classes and methods should be coherent.
Developers should aim at naming an object as if it already existed, which makes
the overall system design understandable.
Programmer’s work conditions
 40-Hour Week
XP projects require developers to work fast, be efficient, and sustain the product’s
quality. To adhere to these requirements, they should feel well and rested. Keeping
the work-life balance prevents professionals from burnout. In XP, the optimal
number of work hours must not exceed 45 hours a week. Overtime in one week is
permitted only if there will be none the week after.
SCRUM
SCRUM is an agile development method which concentrates specifically on
how to manage tasks within a team-based development environment. Basically,
Scrum is derived from the activity that occurs during a rugby match. Scrum believes in
empowering the development team and advocates working in small teams (say- 7

to 9 members). In the Scrum model, the team members assume three basic roles:
product owner, scrum master, and team member. The responsibilities associated
with these three roles are discussed below.
 Product owner: The product owner represents the customer’s perspective
and guides the team toward building the right software. In other words, in
various meetings the product owner takes the responsibility of communi-
cating the customer’s views to the development team. To this end, in every
sprint the product owner in consultation with the team members defines the
features of the software to be developed in the next sprint, decides on the re-
lease dates, and also reprioritizes the software features if needed.
 Scrum master: The scrum master acts as the project manager for the project.
The responsibilities of the scrum master include removing any impediments
that the project may face, ensuring that the team is fully productive by fos-
tering close cooperation among all team members. Also, the scrum master
acts as a liaison between the customers, top management, and the team and
facilitates the development work. The scrum team is therefore shielded by
the scrum master from external interferences.
 Team member: A scrum team usually consists of cross-functional team
members with expertise in areas such as quality assurance, programming,
user interface design, testing. The team is self-organizing in the sense that
the team members distribute the responsibilities among themselves, in con-

trast to a conventional team where a team leader decides who will do what.
Artifacts
Product Backlog
This is a repository where requirements are tracked, with details on the number of
requirements to be completed for each release. It should be maintained and
prioritized by the product owner, and it should be distributed to the scrum team.
The team can also request the addition, modification, or deletion of a requirement.

Sprint Backlog
The Sprint Backlog is a list of everything that the team commits to achieve in a
given Sprint. Once created, no one can add to the Sprint Backlog except the
Development Team. If the Development Team needs to drop an item from the
Sprint Backlog, they must negotiate it with the Product Owner. During this
negotiation, the ScrumMaster should work with the Development Team and
Product Owner to try to find ways to create some smaller increment of an item
rather than drop it altogether.

Sprint Burndown Chart


Sprint burndowns are a graphical way of showing how much work is remaining in
the sprint, typically in terms of task hours. It is typically updated at the daily
scrum. As the sprint progresses, the amount of work remaining should steadily de-
crease and should trend toward being complete on the last day of the sprint. Burn-
downs that show increasing work or few completed tasks are signals to the
ScrumMaster and the team that the sprint is not going well.
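The arithmetic behind a burndown is simple: remaining task hours recorded at each daily scrum, with the day-to-day difference showing how much was burned. A sketch with hypothetical figures:

```python
# Remaining task hours recorded at each daily scrum of a 5-day sprint
# (figures are hypothetical).
remaining = [40, 33, 25, 16, 6, 0]

# Daily burn: hours completed between consecutive daily scrums.
burned_per_day = [remaining[i] - remaining[i + 1]
                  for i in range(len(remaining) - 1)]
print(burned_per_day)  # → [7, 8, 9, 10, 6]

# A healthy burndown trends steadily toward zero; any negative burn
# (remaining work increasing) is a signal that the sprint is in trouble.
trend_ok = all(b >= 0 for b in burned_per_day)
print(trend_ok)        # → True
```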

Scrum Practices
The practices are described in detail below:

Process flow of Scrum:
The process flow of Scrum is as follows:
 Each iteration of a scrum is known as Sprint
 Product backlog is a list where all details are entered to get end product
 During each Sprint, top items of Product backlog are selected and turned into
Sprint backlog
 Team works on the defined sprint backlog
 Team checks for the daily work
 At the end of the sprint, team delivers product functionality
The term scrum ceremonies is used to denote the meetings that are mandatorily held
during the duration of a project. The scrum ceremonies include three different
types of meetings: sprint planning, daily scrum, and sprint review meetings.
 Sprint planning: During the sprint planning meeting, the team members
commit to develop and deliver certain features in the ensuing sprint, out of
those listed in the product backlog. In this meeting, the product owner
works with the team to negotiate which product backlog items the team
should work on in the next sprint in order to meet the release goals. It is the
responsibility of the scrum master to ensure that the team agrees to realistic
goals for a sprint.
 Daily scrum: The daily scrum is a short stand-up meeting conducted every
morning to review the status of the progress achieved and the major issues
being faced on a day-to-day basis. The daily scrum meeting is not a
problem solving meeting, rather each member updates the teammates about
what he has achieved in the previous day. Each team member focuses on an-
swering three questions: What did he do yesterday? What will he do today?
What obstacles are in his way? The daily scrum meeting helps the scrum
master to track the progress made so far and helps to address any problems
needing immediate attention. Also the team members get an appraisal of the
project status and any specific problems that are being faced. To keep the
meeting short (usually 15 or 20 minutes) and focused, this meeting is con-
ducted with the team members standing up.
 Sprint review meeting: At the end of each sprint, a sprint review meeting is
conducted. In this meeting, the team demonstrates the new functionality de-
veloped during the sprint that was just completed to the product owner and
to the stakeholders. Feedback is collected from the participants of the meet-
ing and these are either taken into account in the next sprint or are added to
the product backlog.

Managing interactive processes


Booch supports the iterative and incremental development of a system.
He defines two processes describing the layout of Object Oriented development:
Macro process
 Establish core requirements (conceptualization).

 Develop a model of the desired behavior (analysis).
 Create an architecture (design).
 Evolve the implementation (evolution).
 Manage post delivery evolution (maintenance).
Micro process
 Identify the classes and objects at a given level of abstraction.
 Identify the semantics of these classes and objects.
 Identify the relationships among these classes and objects.
 Specify the interface and then the implementation of these classes and ob-
jects.

 In principle, the micro process represents the daily activity of the individual
developer, or of a small team of developers. Here the analysis and design
phases are intentionally blurred. Stroustrup observes that: "There are
no cookbook methods that can replace intelligence, experience, and good
taste in design and programming...The different phases of a software project,
such as design, programming, and testing cannot be strictly separated".
 The macro process serves as the controlling framework of the micro process.
It represents the activities of the entire development team on the scale of
weeks to months at a time. The basic philosophy of the macro process is that
of incremental development: the system as a whole is built up step by step,
each successive version consisting of the previous ones plus a number of new
functions.
Basis of software estimation
The four basic steps in software project estimation are:
1) Estimate the size of the development product. This generally ends up in either
Lines of Code (LOC) or Function Points (FP), but there are other possible units of
measure, each with its own pros and cons.
2) Estimate the effort in person-months or person-hours.
3) Estimate the schedule in calendar months.
4) Estimate the project cost in dollars (or local currency)
Information about past projects
 Need to collect performance details about past projects: how big were they?
How much effort/time did they need?
 Allow for differences in environmental factors such as the programming
languages used and the experience of staff.
 Data can be collected from the international database maintained by the
International Software Benchmarking Standards Group (ISBSG), which
contains data from 4800 projects.
Parameters to be Estimated
⚫ Size is a fundamental measure of work
⚫ Based on the estimated size, two parameters are estimated:
◦ Effort and Duration
⚫ Effort is measured in person-months:
◦ One person-month is the effort an individual can typically put in a
month.
⚫ Duration is always measured in months. Work-month (wm) is a popular unit
for effort measurement; person-month (pm) is also frequently used to mean
the same as the work-month.
Measure of effort
⚫ “Cost varies as product of men and months, progress does not.”
◦ Hence the man-month as a unit for measuring the size of a job is a
dangerous and deceptive myth.
⚫ The myth of additional manpower
◦ Brooks' Law: “Adding manpower to a late software project makes it later”
Mythical Man-Month

For tasks with complex interrelationships, the addition of manpower to a late
project does not help.
Measure of Work
⚫ The project size is a measure of the problem complexity in terms of the effort
and time required to develop the product.
⚫ Two metrics are used to measure project size:
◦ Source Lines of Code (SLOC)
◦ Function point (FP)
⚫ FP is nowadays favoured over SLOC:
◦ Because of the many shortcomings of SLOC.

Effort and cost estimation techniques


Barry Boehm, in his classic work on software effort models, identified the main
ways of deriving estimates of software development effort as:
(1) Algorithmic cost modelling: A model is developed using historical cost
information which relates some software metric (usually its size) to the project
cost. An estimate is made of that metric and the model predicts the effort
required.
(2) Expert judgement: One or more experts on the software development
techniques to be used and on the application domain are consulted. They each
estimate the project cost and the final cost estimate is arrived at by consensus.
(3) Estimation by analogy: This technique is applicable when other projects in the
same application domain have been completed. The cost of a new project is
estimated by analogy with these completed projects.
(4) Parkinson's Law: Parkinson's Law states that work expands to fill the time
available. In software costing, this means that the cost is determined by
available resources rather than by objective assessment. If the software has to
be delivered in 12 months and 5 people are available, the effort required is
estimated to be 60 person-months.
(5) Pricing to win: The software cost is estimated to be whatever the customer has
available to spend on the project. The estimated effort depends on the
customer's budget and not on the software functionality.
(6) Top-down estimation: A cost estimate is established by considering the overall
functionality of the product and how that functionality is provided by
interacting sub-functions. Cost estimates are made on the basis of the logical
function rather than the components implementing that function.
(7) Bottom-up estimation: The cost of each component is estimated, and all these
costs are added to produce a final cost estimate.
Clearly, the 'Parkinson' method is not really an effort prediction method, but a
method of setting the scope of a project. Similarly, 'price to win' is a way of
deciding a price and not a prediction method. On these grounds, Boehm rejects
them as prediction techniques although they might have some value as

management techniques. There is, for example, a perfectly acceptable engineering
practice of 'design to cost' which is one example of the broader approach of 'design
by objectives'.

Bottom-up estimating
 Estimating methods can be generally divided into bottom-up and top-down
approaches. With the bottom-up approach, the estimator breaks the project
into its component tasks and then estimates how much effort will be required
to carry out each task.
 With a large project, the process of breaking down into tasks would be a
repetitive one: each task would be analysed into its component sub-tasks and
these in turn would be further analysed. This is repeated until you get to
components that can be executed by a single person in about a week or two.
 The reader might wonder why this is not called a top-down approach: after
all you are starting from the top and working down! Although this top-down
analysis is an essential precursor to bottom-up estimating, it is really a
separate one - that of producing a Work Breakdown Structure (WBS). The
bottom-up part comes in adding up the calculated effort for each activity to
get an overall estimate.
 The bottom-up approach is most appropriate at the later, more detailed,
stages of project planning. If this method is used early on in the project cycle
then the estimator will have to make some assumptions about the
characteristics of the final system, for example the number and size of
software modules. These will be working assumptions that imply no
commitment when it comes to the actual design of the system.
 Where a project is completely novel or there is no historical data available,
the estimator would be well advised to use the bottom-up approach.

The top-down approach and parametric models


 Top-down estimates produce an overall estimate using effort driver(s) and
distribute proportions of that overall estimate to components.

 The top-down approach is normally associated with parametric (or
algorithmic) models. These may be explained using the analogy of estimating
the cost of rebuilding a house. This would be of practical concern to a house-
owner who needs sufficient insurance cover to allow for rebuilding the

property if it were destroyed. Unless the house-owner happens to be in the
building trade it is unlikely that he or she would be able to work out how
many bricklayer-hours, how many carpenter-hours, electrician-hours and so
on would be required. Insurance companies, however, produce convenient
tables where the house-owner can find an estimate of rebuilding costs based
on such parameters as the number of storeys and the floor space that a house
has. This is a simple parametric model.

 The effort needed to implement a project will be related mainly to variables
associated with characteristics of the final system. The form of the parametric
model will normally be one or more formulae in the form:
effort = (system size) x (productivity rate)

 For example, system size might be in the form 'thousands of lines of code'
(KLOC) and the productivity rate 40 days per KLOC. The values to be used
will often be matters of subjective judgement.
 A model to forecast software development effort therefore has two key
components. The first is a method of assessing the size of the software
development task to be undertaken. The second assesses the rate of work at
which the task can be done.
 For example, Amanda at IOE might estimate that the first software module to
be constructed is 2 KLOC. She might then judge that if Kate undertook the
development of the code, with her expertise she could work at a rate of 40
days per KLOC and complete the work in 2 x 40 days, that is, 80 days, while
Ken, who is less experienced, would need 55 days per KLOC and take 2 x 55,
that is, 110 days to complete the task.
 Some parametric models, such as that implied by function points, are focused
on system or task size, while others, such as COCOMO, are more concerned
with productivity factors.
 Having calculated the overall effort required, the problem is then to allocate
proportions of that effort to the various activities within that project.
 The top-down and bottom-up approaches are not mutually exclusive. Project
managers will probably try to get a number of different estimates from
different people using different methods. Some parts of an overall estimate
could be derived using a top-down approach while other parts could be
calculated using a bottom-up method.
 At the earlier stages of a project, the top-down approach would tend to be
used, while at later stages the bottom-up approach might be preferred.
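The parametric formula effort = (system size) x (productivity rate) and the IOE example above can be sketched directly (the function name is illustrative):

```python
def estimate_effort(size_kloc, days_per_kloc):
    """Parametric estimate: effort = system size x productivity rate."""
    return size_kloc * days_per_kloc

# The module is estimated at 2 KLOC; the rates reflect each developer's
# experience, as in the Amanda/Kate/Ken example.
print(estimate_effort(2, 40))  # Kate: 80 days
print(estimate_effort(2, 55))  # Ken: 110 days
```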

Expert judgement
⚫ Asking someone who is familiar with and knowledgeable about the
application area and the technologies to provide an estimate
⚫ Particularly appropriate where existing code is to be modified
⚫ Research shows that expert judgement in practice tends to be based on
analogy

Estimating by analogy
⚫ It is also called case-based reasoning.
⚫ For a new project, the estimator identifies previously completed projects
that have similar characteristics to it.
⚫ The new project is referred to as the target project or target case.
⚫ The completed projects are referred to as the source projects or source cases.
⚫ The effort recorded for the matching source case is used as the base estimate
for the target project.
⚫ The estimator calculates an estimate for the new project by adjusting the
base estimate, based on the differences that exist between the two projects.

Example
Assume that cases are matched on the basis of two parameters: the number of
inputs and the number of outputs.
• The new project (target case) requires 7 inputs and 15 outputs.
• You are looking at two past cases (source cases) to find the better analogy with
the target project:
• Project A has 8 inputs and 17 outputs.
• Project B has 5 inputs and 10 outputs.
Which is the closer match for the new project: Project A or Project B?
Distance between new project and Project A:
square root of ((7-8)^2 + (15-17)^2) = 2.24
Distance between new project and Project B:
square root of ((7-5)^2 + (15-10)^2) = 5.39
Project A is the better match because its distance to the new project is smaller
than that of Project B.
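The distance calculation above is a Euclidean distance over the matched parameters, which can be sketched as:

```python
import math

def distance(target, source):
    """Euclidean distance between two projects over (inputs, outputs)."""
    return math.sqrt(sum((t - s) ** 2 for t, s in zip(target, source)))

target = (7, 15)       # new project: 7 inputs, 15 outputs
project_a = (8, 17)
project_b = (5, 10)

print(round(distance(target, project_a), 2))  # → 2.24
print(round(distance(target, project_b), 2))  # → 5.39
# Project A is the closer analogy, so its recorded effort becomes
# the base estimate for the target project.
```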

Size Oriented Metrics:
i) Source Lines of Code (SLOC): a software metric used to measure the size of a
software program by counting the number of lines in the text of the program’s
source code. This metric does not count blank lines, comment lines, and library.
SLOC measures are programming language dependent. They cannot easily
accommodate nonprocedural languages. SLOC also can be used to measure others,
such as errors/KLOC, defects/KLOC, pages of documentation/KLOC,
cost/KLOC. ii) Deliverable Source Instruction (DSI): i similar to SLOC. The
difference between DSI and SLOC is that s ”if-then-else” statement, it would be
counted as one SLOC but might be counted as several DSI .
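The counting rule above (skip blank lines and comment-only lines) can be illustrated with a simplistic counter. This sketch assumes Python-style `#` comments and is not a substitute for a real code-counting tool:

```python
def count_sloc(source_text):
    """Count source lines of code, ignoring blank lines and comment-only lines."""
    count = 0
    for line in source_text.splitlines():
        stripped = line.strip()
        if stripped and not stripped.startswith("#"):
            count += 1
    return count

sample = '''# demo module
x = 1

if x > 0:   # an inline comment does not disqualify a code line
    print(x)
'''
print(count_sloc(sample))  # 3
```

Real tools additionally handle multi-line comments, string literals that contain comment characters, and per-language syntax.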
Function Oriented Metrics:
Function Point (FP): the FP, defined by Allan Albrecht at IBM in 1979, is a unit of measurement expressing the amount of software functionality. Function point analysis (FPA) is the method of measuring the size of software in function points. Its advantage is that the measure does not depend on the source code, so it avoids the distortions that arise when different programming languages are used. FP is programming-language independent, making it ideal for applications using conventional and nonprocedural languages. It is based on data that are more likely to be known early in the evolution of a project.
COSMIC full function points
Function Point Analysis (FPA) is one of the most widely used methods to determine the size of software projects. FPA originated at a time when only a mainframe environment was available, and sizing of specifications was typically based on functional decomposition and modelled data. Nowadays, development methods like Object Oriented, Component Based and RAD are applied more often. There is also more attention to architecture and to the use of client-server and multi-tier environments. Another development is the growth in complexity caused by more integrated applications, real-time applications, embedded systems and combinations of these. FPA was not designed to cope with these various newer development approaches.
The Common Software Measurement International Consortium (COSMIC) aimed to develop, test, bring to market and seek acceptance of a new software sizing method to support estimating and performance measurement (productivity, time-to-market and quality). The measurement method must be applicable for estimating the effort of developing and maintaining software in various software domains. Not only business software (MIS) but also real-time software (avionics, telecom, process control) and embedded software (mobile phones, consumer electronics) can be measured.
The basis for measurement must be found, just as in FPA, in the user requirements the software must fulfil. The result of the measurement must be independent of the development environment and of the method used to specify these requirements: sizes depend only on the user requirements.
COSMIC Concepts
The Functional User Requirements (FUR) are, according to the definition of a functional size measurement method, the basis for measurement. They specify the user's needs and the procedures that the software should fulfil.
The FUR are analysed to identify the functional processes. A Functional Process is
an elementary component of a set of FUR. It is triggered by one or more events in
the world of the user of the software being measured. The process is complete
when it has executed all that is required to be done in response to the triggering
event.
Each functional process consists of a set of sub-processes that are either movements or manipulations of data. Since no one knows how to measure data manipulation, and since the aim is to measure 'data-movement-rich' software, the simplifying assumption is made that each functional process consists of a set of data movements.
A Data Movement moves one Data Group. A Data Group is a unique cohesive set
of data (attributes) specifying an ‘object of interest’ (i.e. something that is ‘of
interest’ to the user). Each Data Movement is counted as one CFP (COSMIC
function point).
COSMIC recognises four types of Data Movements:
• Entry moves data from outside into the process
• Exit moves data from the process to the outside world
• Read moves data from persistent storage to the process
• Write moves data from the process to persistent storage.
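Because every Data Movement counts as exactly one CFP, sizing a functional process amounts to counting its movements. A minimal sketch, using a hypothetical 'record order' process as input:

```python
# The four COSMIC data-movement types; each movement counts as 1 CFP
MOVEMENT_TYPES = {"entry", "exit", "read", "write"}

def size_in_cfp(movements):
    """Size of a functional process = number of its data movements (in CFP)."""
    unknown = [m for m in movements if m not in MOVEMENT_TYPES]
    if unknown:
        raise ValueError(f"unknown movement type(s): {unknown}")
    return len(movements)

# Hypothetical 'record order' process: a triggering Entry, a Read of the
# customer data group, a Write of the order, and a confirmation Exit
record_order = ["entry", "read", "write", "exit"]
print(size_in_cfp(record_order))  # 4 CFP
```

The total size of a piece of software is then the sum of the CFP sizes of its functional processes.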
From a pure size measurement point of view, the most important improvements of the COSMIC method compared with traditional Function Points are as follows:
• The COSMIC method was designed to measure the functional requirements of software in the domains of business applications, real-time and infrastructure software (e.g. operating systems, web components, etc.), in any layer of a multi-layer architecture and at any level of decomposition. Traditional Function Points were designed to measure only the functionality 'seen' by human users of business software in the application layer.
• Traditional Function Points use a size scale with a limited range of possible sizes for each component. COSMIC functional processes are measured on a continuous size scale with a minimum of 2 CFP and no upper size limit. Modern software can have extremely large processes: individual functional processes of roughly 100 CFP have been measured in avionics software systems and in public national insurance systems. Traditional Function Points can therefore give highly misleading sizes for certain types of software, which means that great care must be taken when using these sizes for performance measurement or estimating.
• The COSMIC method gives a much finer measure of the size of any changes to be made to software than traditional Function Points. The smallest change that can be measured with the COSMIC method is 1 CFP.
Users of the COSMIC method have reported the following benefits, compared with using '1st generation' methods:
• Easy to learn and stable due to the principles-based approach, hence 'future-proof' and cost-effective to implement;
• Well accepted by project staff due to the ease of mapping of the method's concepts to modern software requirements documentation methods, and to its compatibility with modern software architectures;
• Improves estimating accuracy, especially for larger software projects;
• Possible to size requirements automatically that are held in CASE tools;
• Reveals real performance improvement where using traditional function points has not indicated any improvement, due to their inability to recognise how software processes have increased in size over time;
• Sizing with COSMIC is an excellent way of controlling the quality of the requirements at all stages as they evolve.
COCOMO – a parametric model
COCOMO (Constructive Cost Model) was proposed by Boehm. According to him, any software development project can be classified into one of the following three categories based on the development complexity: organic, semidetached, and embedded. The classification is done considering the characteristics of the product as well as those of the development team and development environment. Usually these three product classes correspond to application, utility and system programs, respectively. Data processing programs are normally considered to be application programs; compilers, linkers, etc. are utility programs; operating systems, real-time system programs, etc. are system programs.
The definitions of organic, semidetached, and embedded systems are elaborated below.
 Organic: A development project can be considered of organic type if the project deals with developing a well-understood application program, the size of the development team is reasonably small, and the team members are experienced in developing similar types of projects.
 Semidetached: A development project can be considered of semidetached type if the development team consists of a mixture of experienced and inexperienced staff. Team members may have limited experience of related systems but may be unfamiliar with some aspects of the system being developed.
 Embedded: A development project is considered to be of embedded type if the software being developed is strongly coupled to complex hardware, or if stringent regulations on the operational procedures exist.
Estimates are required at different stages in the system life cycle and COCOMO II
has been designed to accommodate this by having models for three different
stages.
 Application composition: where the external features of the system that the users will experience are designed. Prototyping will typically be employed to do this. With small applications that can be built using high-productivity application-building tools, development can stop at this point.
 Early design: where the fundamental software structures are designed. With larger, more demanding systems, where, for example, there will be large volumes of transactions and performance is important, careful attention will need to be paid to the architecture to be adopted.
 Post architecture: where the software structures undergo final construction, modification and tuning to create a system that will perform as required.
To estimate the effort for application composition, the developers of COCOMO II recommend counting object points, which were described earlier. At the early design stage, FPs are recommended as the way of gauging a basic system size. An FP count might be converted to an SLOC equivalent by multiplying the FPs by a factor for the programming language that is to be used.
According to Boehm, software cost estimation should be done through three stages: Basic COCOMO, Intermediate COCOMO, and Complete COCOMO.
Basic COCOMO Model
The basic COCOMO model gives an approximate estimate of the project parameters. The basic COCOMO estimation model is given by the following expressions:
Effort = a * (KLOC)^b PM
Tdev = 2.5 * (Effort)^c Months
where
 KLOC is the estimated size of the software product, expressed in Kilo Lines of Code,
 a, b, c are constants for each category of software product,
 Tdev is the estimated time to develop the software, expressed in months,
 Effort is the total effort required to develop the software product, expressed in person-months (PMs).
The values of the constants a, b and c are given below:

Software project     a     b     c
Organic              2.4   1.05  0.38
Semi-detached        3.0   1.12  0.35
Embedded             3.6   1.20  0.32
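The basic COCOMO expressions and constants translate directly into code. A minimal sketch (the 32 KLOC example size is illustrative):

```python
# Basic COCOMO constants (a, b, c) per project category
COEFFS = {
    "organic":      (2.4, 1.05, 0.38),
    "semidetached": (3.0, 1.12, 0.35),
    "embedded":     (3.6, 1.20, 0.32),
}

def basic_cocomo(kloc, category):
    """Return (effort in person-months, development time in months)."""
    a, b, c = COEFFS[category]
    effort = a * kloc ** b          # Effort = a * (KLOC)^b
    tdev = 2.5 * effort ** c        # Tdev = 2.5 * (Effort)^c
    return effort, tdev

# Illustrative 32 KLOC organic project
effort, tdev = basic_cocomo(32, "organic")
print(f"Effort = {effort:.1f} PM, Tdev = {tdev:.1f} months")  # ~91.3 PM, ~13.9 months
```

Note that effort grows slightly faster than linearly with size (b > 1), while development time grows much more slowly than effort (c < 1), reflecting the use of larger teams on larger projects.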
Intermediate COCOMO Model
The basic COCOMO model assumes that effort and development time are
functions of the product size alone. However, many other project parameters apart
from the product size affect the development effort and time required for the
product. Therefore, in order to obtain an accurate estimation of the effort and
project duration, the effect of all relevant parameters must be taken into account.
The intermediate COCOMO model recognizes this fact and refines the initial
estimate obtained using the basic COCOMO expressions by using a set of 15 cost
drivers (multipliers) based on various attributes of software development. For
example, if modern programming practices are used, the initial estimates are
scaled downward by multiplication with a cost driver having a value less than 1.
Each of the 15 attributes receives a rating on a six-point scale that ranges from "very low" to "extra high" (in importance or value) as shown below. An effort multiplier from the table below applies to each rating. The product of all the selected effort multipliers gives the Effort Adjustment Factor (EAF).
Cost Drivers                                    Very   Low    Nominal  High   Very   Extra
                                                low                           high   high
Product attributes
 Required software reliability                  0.75   0.88   1.00     1.15   1.40
 Size of application database                          0.94   1.00     1.08   1.16
 Complexity of the product                      0.70   0.85   1.00     1.15   1.30   1.65
Hardware attributes
 Run-time performance constraints                             1.00     1.11   1.30   1.66
 Memory constraints                                           1.00     1.06   1.21   1.56
 Volatility of the virtual machine environment         0.87   1.00     1.15   1.30
 Required turnabout time                               0.87   1.00     1.07   1.15
Personnel attributes
 Analyst capability                             1.46   1.19   1.00     0.86   0.71
 Applications experience                        1.29   1.13   1.00     0.91   0.82
 Software engineer capability                   1.42   1.17   1.00     0.86   0.70
 Virtual machine experience                     1.21   1.10   1.00     0.90
 Programming language experience                1.14   1.07   1.00     0.95
Project attributes
 Application of software engineering methods    1.24   1.10   1.00     0.91   0.82
 Use of software tools                          1.24   1.10   1.00     0.91   0.83
 Required development schedule                  1.23   1.08   1.00     1.04   1.10
EAF is used to refine the estimates obtained by basic COCOMO as follows:
Effort_corrected = Effort * EAF
Tdev_corrected = 2.5 * (Effort_corrected)^c
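The refinement step can be sketched as follows. The chosen driver ratings and the 91.3 PM base effort are purely illustrative; the multiplier values come from the cost-driver table:

```python
import math

def eaf(multipliers):
    """Effort Adjustment Factor: the product of all selected effort multipliers."""
    return math.prod(multipliers)

# Hypothetical ratings: required reliability 'high' (1.15),
# product complexity 'very high' (1.30), analyst capability 'high' (0.86)
adjustment = eaf([1.15, 1.30, 0.86])

basic_effort = 91.3                               # PM, illustrative basic estimate
corrected_effort = basic_effort * adjustment      # Effort_corrected = Effort * EAF
corrected_tdev = 2.5 * corrected_effort ** 0.38   # c = 0.38 for an organic project
print(f"EAF = {adjustment:.3f}, corrected effort = {corrected_effort:.1f} PM")
```

Multipliers below 1 (e.g. a capable analyst team) pull the estimate down, while multipliers above 1 (e.g. high complexity) push it up; the nominal rating of 1.00 leaves the basic estimate unchanged.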
Complete COCOMO Model
Both the basic and intermediate COCOMO models consider a software product as a single homogeneous entity. However, most large systems are made up of several smaller sub-systems, each of which may in turn be organic, semidetached, or embedded. The complete COCOMO model takes these differences in the characteristics of the subsystems into account and estimates the effort and development time as the sum of the estimates for the individual subsystems. This approach reduces the percentage of error in the final estimate.
The following development project can be considered as an example application of
the complete COCOMO model. A distributed Management Information System
(MIS) product for an organization having offices at several places across the
country can have the following sub-components:
 Database part
 Graphical User Interface (GUI) part
 Communication part
Of these, the communication part can be considered as embedded software. The
database part could be semi-detached software, and the GUI part organic software.
The costs for these three components can be estimated separately, and summed up
to give the overall cost of the system.
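Summing separately estimated subsystems can be sketched with the basic-model effort equation; the KLOC figures for the three MIS components are hypothetical:

```python
# Basic COCOMO effort constants (a, b) per category; c is not needed here
COEFFS = {
    "organic":      (2.4, 1.05),
    "semidetached": (3.0, 1.12),
    "embedded":     (3.6, 1.20),
}

def effort(kloc, category):
    """Basic COCOMO effort (person-months) for one subsystem."""
    a, b = COEFFS[category]
    return a * kloc ** b

# Hypothetical subsystem sizes for the distributed MIS example
subsystems = [
    ("GUI",           24, "organic"),
    ("Database",      30, "semidetached"),
    ("Communication", 16, "embedded"),
]

total = sum(effort(kloc, cat) for _, kloc, cat in subsystems)
print(f"Total estimated effort = {total:.1f} PM")
```

Each component uses the constants for its own category, so the embedded communication part contributes disproportionately more effort per KLOC than the organic GUI part.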
Uses of COCOMO II Model