
SE - Module 3


Module 3

Software Estimation Metrics

Management Spectrum
The management spectrum describes how to manage a software project and make it successful. It focuses on the four P's: people, product, process and project.
The project manager has to control all four P's to keep the project progressing smoothly and to reach its goal.
The four P's of the management spectrum are described briefly below.
The People:
The people of a project range from the manager to the developers, and from the customer to the end users, but the term mainly highlights the developers.
Highly skilled and motivated developers are so important that the Software Engineering Institute has developed a People Management Capability Maturity Model (PM-CMM), "to enhance the readiness of software organizations to undertake increasingly complex applications by helping to attract, grow, motivate, deploy, and retain the talent needed to improve their software development capability".
Organizations that achieve high levels of maturity in the people management area have a higher likelihood of implementing effective software engineering practices.
The Product:
The product is the software to be developed. To develop it successfully, product objectives and scope should be established, alternative solutions should be considered, and technical and management constraints should be identified.
Without this information, it is impossible to define reasonable and accurate cost estimates, an effective assessment of risk, a realistic breakdown of project tasks, or a manageable project schedule that provides a meaningful indication of progress.
The Process:
A software process provides the framework from which a comprehensive plan for software development can be established.
A number of different task sets (tasks, milestones, work products, and quality assurance points) enable the framework activities to be adapted to the characteristics of the software project and the requirements of the project team.
Finally, umbrella activities overlay the process model. Umbrella activities are independent of any one framework activity and occur throughout the process.
The Project:
The project encompasses everything in the total development process. To avoid project failure, the manager has to take certain steps and be alert to common warning signs.

Process Metrics:

Private process metrics (e.g. defect rates by individual or module) are known only to the individual or team concerned.

Public process metrics enable organizations to make strategic changes to improve the software process.

Metrics should not be used to evaluate the performance of individuals.

Statistical software process improvement helps an organization discover where it is strong and where it is weak.

In-process quality metrics deal with the tracking of defect arrival during formal machine testing for some organizations. These metrics include:

 Defect density during machine testing
 Defect arrival pattern during machine testing
 Phase-based defect removal pattern
 Defect removal effectiveness

Defect density during machine testing

The defect rate during formal machine testing (testing after code is integrated into the system library) is correlated with the defect rate in the field. A higher defect rate found during testing is an indicator that the software has experienced higher error injection during its development process, unless the higher testing defect rate is due to an extraordinary testing effort.
This simple metric of defects per KLOC or function point is a good indicator of quality while the software is still being tested. It is especially useful for monitoring subsequent releases of a product in the same development organization.
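As a minimal sketch of how this metric is computed (the release figures below are hypothetical):

```python
def defect_density(defects_found, kloc):
    """Defects per thousand lines of code (KLOC)."""
    return defects_found / kloc

# Hypothetical release: 120 defects found while testing 40 KLOC of code.
print(defect_density(120, 40))  # 3.0 defects per KLOC
```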

Defect arrival pattern during machine testing

The overall defect density during testing provides only a summary of the defects. The pattern of defect arrivals gives more information about the different quality levels in the field. It includes the following:
 The defect arrivals, or defects reported during the testing phase, by time interval (e.g., week). Not all of these will be valid defects.
 The pattern of valid defect arrivals after problem determination is done on the reported problems. This is the true defect pattern.
 The pattern of defect backlog over time. This metric is needed because development organizations cannot investigate and fix all the reported problems immediately. It is a workload statement as well as a quality statement. If the defect backlog is large at the end of the development cycle and a lot of fixes have yet to be integrated into the system, the stability of the system (and hence its quality) will be affected. Retesting (regression testing) is needed to ensure that targeted product quality levels are reached.
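The arrival patterns described above can be tabulated from raw defect reports; a small sketch (the report data and triage outcomes are hypothetical):

```python
from collections import Counter

# Hypothetical defect reports: (week reported, valid after problem determination?)
reports = [(1, True), (1, False), (2, True), (2, True),
           (3, True), (3, False), (3, True), (4, True)]

arrivals = Counter(week for week, _ in reports)               # all reported defects
valid_arrivals = Counter(week for week, ok in reports if ok)  # true defect pattern

print(dict(arrivals))        # {1: 2, 2: 2, 3: 3, 4: 1}
print(dict(valid_arrivals))  # {1: 1, 2: 2, 3: 2, 4: 1}
```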

Phase-based defect removal pattern

This is an extension of the defect density metric during testing. In addition to testing, it tracks the defects at all phases of the development cycle, including design reviews, code inspections, and formal verifications before testing.
Because a large percentage of programming defects is related to design problems, conducting formal reviews or functional verifications to enhance the defect removal capability of the process at the front end reduces errors in the software. The pattern of phase-based defect removal reflects the overall defect removal ability of the development process.
With regard to the metrics for the design and coding phases, in addition to defect rates, many development organizations use metrics such as inspection coverage and inspection effort for in-process quality management.

Defect removal effectiveness

It can be defined as follows:

DRE = (Defects removed during a development phase / Defects latent in the product) x 100%

This metric can be calculated for the entire development process, for the front end before code integration, and for each phase. It is called early defect removal when used for the front end and phase effectiveness for specific phases.

The higher the value of the metric, the more effective the development process and the fewer the defects passed to the next phase or to the field. This metric is a key concept of the defect removal model for software development.
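A short sketch of the DRE calculation, using hypothetical phase figures:

```python
def dre(defects_removed, defects_latent):
    """Defect Removal Effectiveness as a percentage.

    defects_latent = defects removed during the phase + defects that
    existed at that point but were only found in later phases.
    """
    return defects_removed / defects_latent * 100

# Hypothetical: a design review removes 60 defects; 20 more defects that
# existed at review time escape and are found later.
print(dre(60, 60 + 20))  # 75.0
```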

These are metrics that pertain to process quality. They are used to measure the efficiency and effectiveness of various processes.
1. Cost of quality: It is a measure of the performance of quality initiatives in an organization. It is expressed in monetary terms.

Cost of quality = (review + testing + verification review + verification testing + QA + configuration management + measurement + training + rework review + rework testing) / total effort x 100.

2. Cost of poor quality: It is the cost of implementing imperfect processes and products.

Cost of poor quality = rework effort / total effort x 100.
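Both formulas can be sketched as follows (all effort figures below are hypothetical, in person-hours):

```python
def cost_of_quality(quality_effort, total_effort):
    """Cost of quality as a percentage of total effort.

    quality_effort sums review, testing, verification, QA, configuration
    management, measurement, training, and rework effort.
    """
    return quality_effort / total_effort * 100

def cost_of_poor_quality(rework_effort, total_effort):
    """Cost of poor quality: rework as a percentage of total effort."""
    return rework_effort / total_effort * 100

# Hypothetical figures: review 80, testing 120, verification 40, QA 30,
# configuration management 20, measurement 10, training 20, rework 60.
quality = 80 + 120 + 40 + 30 + 20 + 10 + 20 + 60
print(cost_of_quality(quality, total_effort=1000))                # 38.0
print(cost_of_poor_quality(rework_effort=60, total_effort=1000))  # 6.0
```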

Project Metrics

These are metrics that relate to project quality. They are used to quantify defects, cost, schedule, productivity, and estimation of various project resources and deliverables.

• Software project metrics are used by the software team to adapt project workflow and technical activities.
• Project metrics are used to avoid development schedule delays, to mitigate potential risks, and to assess product quality on an ongoing basis.
• Every project should measure its inputs (resources), outputs (deliverables), and results (effectiveness of deliverables).

These metrics describe the project characteristics and execution. Examples include the number of software developers, the staffing pattern over the life cycle of the software, cost, schedule, and productivity.

1. Schedule Variance: Any difference between the scheduled completion of an activity and its actual completion is known as schedule variance.

Schedule variance = ((Actual calendar days – Planned calendar days) + Start variance) / Planned calendar days x 100.

2. Effort Variance: The difference between the planned effort and the effort actually required to undertake the task is called effort variance.

Effort variance = (Actual effort – Planned effort) / Planned effort x 100.
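The two variance formulas can be sketched directly (the task figures below are hypothetical):

```python
def schedule_variance(actual_days, planned_days, start_variance=0):
    """Schedule variance as a percentage of the planned duration."""
    return ((actual_days - planned_days) + start_variance) / planned_days * 100

def effort_variance(actual_effort, planned_effort):
    """Effort variance as a percentage of the planned effort."""
    return (actual_effort - planned_effort) / planned_effort * 100

# Hypothetical task: planned for 50 days, took 60, and started 5 days late.
print(schedule_variance(60, 50, start_variance=5))  # 30.0
print(effort_variance(110, 100))                    # 10.0
```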

Software Project Estimation

Estimation is the process of finding an estimate, or approximation, which is a value that can be used for some purpose even if the input data may be incomplete, uncertain, or unstable.
Estimation determines how much money, effort, resources, and time it will take to build a specific system or product. Estimation is based on:

 Past data/past experience
 Available documents/knowledge
 Assumptions
 Identified risks

The four basic steps in software project estimation are:

 Estimate the size of the development product.
 Estimate the effort in person-months or person-hours.
 Estimate the schedule in calendar months.
 Estimate the project cost in the agreed currency.

1. Lines of Code (LOC): As the name suggests, LOC counts the total number of lines of source code in a project.

LOC is the simplest among all metrics available to estimate project size. Project size is estimated by counting the number of source instructions in the developed program.

While counting the number of source instructions, lines used for commenting the code and header lines should be ignored. Determining the LOC count at the end of a project is a very simple job; however, accurate estimation of the LOC count at the beginning of a project is very difficult.
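The counting rule above can be sketched as a small counter; this assumes `#`-style line comments, and a real counter would handle each language's comment syntax (including block comments):

```python
def count_loc(source_lines):
    """Count source instructions, ignoring blank and comment-only lines."""
    return sum(1 for line in source_lines
               if line.strip() and not line.strip().startswith("#"))

sample = [
    "# header comment",
    "",
    "x = 1",
    "y = x + 2  # a trailing comment still counts as code",
    "print(y)",
]
print(count_loc(sample))  # 3
```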
Project managers usually divide the problem into modules, each module into sub-modules, and so on, until the sizes of the different leaf-level modules can be approximately predicted. Past experience in developing similar products helps here. Using the estimates of the lowest-level modules, project managers arrive at the total size estimate.

The units of LOC are:

KLOC – Thousand lines of code
NLOC – Non-comment lines of code
KDSI – Thousands of delivered source instructions

The size is estimated by comparing it with existing systems of the same kind. Experts use it to predict the required size of the various components of the software and then add them to get the total size.

Advantages:
Universally accepted and used in many models like COCOMO.
Estimation is closer to the developer's perspective.
Simple to use.
Disadvantages:
Different programming languages contain different numbers of lines for the same functionality.
No proper industry standard exists for this technique.
It is difficult to estimate the size using this technique in the early stages of a project.
FP

Function Point Analysis was initially developed by Allan J. Albrecht in 1979 at IBM and has been further modified by the International Function Point Users Group (IFPUG).
Initial definition given by Allan J. Albrecht:
FPA gives a dimensionless number defined in function points, which we have found to be an effective relative measure of function value delivered to our customer.

Function Point Analysis (FPA) is a method, or set of rules, of functional size measurement. It assesses the functionality delivered to the users, based on the user's external view of the functional requirements. It measures the logical view of an application, not the physically implemented view or the internal technical view.

The Function Point Analysis technique is used to analyse the functionality delivered by software, and the Unadjusted Function Point (UFP) is the unit of measurement.
Objectives of FPA:
1. To measure functionality that the user requests and receives.
2. To measure software development and maintenance independently of the technology used for implementation.
3. It should be simple enough to minimize the overhead of the measurement process.
4. It should be a consistent measure among various projects and organizations.
Types of FPA:
1. Transactional Function Types –
 (i) External Input (EI): EI processes data or control information that comes from outside the application's boundary. The EI is an elementary process.
 (ii) External Output (EO): EO is an elementary process that generates data or control information sent outside the application's boundary.
 (iii) External Inquiry (EQ): EQ is an elementary process made up of an input-output combination that results in data retrieval.
2. Data Function Types –
 (i) Internal Logical File (ILF): A user-identifiable group of logically related data or control information maintained within the boundary of the application.
 (ii) External Interface File (EIF): A group of user-recognizable, logically related data referenced by the software but maintained within the boundary of another software.

1. The FP count of an application is found by counting the number and types of functions used in the application. The various functions used in an application can be put under five types, as shown in the table:

Types of FP Attributes

Measurement Parameters                     Examples
1. Number of External Inputs (EI)          Input screens and tables
2. Number of External Outputs (EO)         Output screens and reports
3. Number of External Inquiries (EQ)       Prompts and interrupts
4. Number of Internal Files (ILF)          Databases and directories
5. Number of External Interfaces (EIF)     Shared databases and shared routines

All these parameters are then individually assessed for complexity.

The FPA functional units are shown in Fig:

2. FP characterizes the complexity of the software system and hence can be used to estimate the project time and manpower requirement.

3. The effort required to develop the project depends on what the software does.

4. FP is programming-language independent.

5. The FP method is used for data processing systems and business systems like information systems.

6. The five parameters mentioned above are also known as information domain characteristics.

7. All the parameters mentioned above are assigned weights, as shown below:
Weights of 5 FP Attributes

Measurement Parameter                      Low   Average   High
1. Number of external inputs (EI)           3       4        6
2. Number of external outputs (EO)          4       5        7
3. Number of external inquiries (EQ)        3       4        6
4. Number of internal files (ILF)           7      10       15
5. Number of external interfaces (EIF)      5       7       10

The functional complexities are multiplied by the corresponding weights against each function, and the values are added up to determine the UFP (Unadjusted Function Point) count of the subsystem.

Here the weighting factor will be simple (low), average, or complex (high) for each measurement parameter type.

The Function Point (FP) is then calculated with the following formula:

FP = Count-total * [0.65 + 0.01 * ∑(fi)]
   = Count-total * CAF

where Count-total is obtained from the table above,

CAF = [0.65 + 0.01 * ∑(fi)]

and ∑(fi) is the sum of the answers to all 14 questionnaires, giving the complexity adjustment value/factor (CAF), where i ranges from 1 to 14. Usually, a student is provided with the value of ∑(fi).

1. Does the system need reliable backup and recovery?
2. Are data communications required?
3. Are there distributed processing functions?
4. Is performance critical?
5. Will the system run in an existing, heavily utilized operational environment?
6. Does the system require on-line data entry?
7. Does the on-line data entry require the input transaction to be built over multiple screens or operations?
8. Are the master files updated on-line?
9. Are the inputs, outputs, files, or inquiries complex?
10. Is the internal processing complex?
11. Is the code designed to be reusable?
12. Are conversion and installation included in the design?
13. Is the system designed for multiple installations in different organizations?
14. Is the application designed to facilitate change and ease of use by the user?

Example: Compute the function point, productivity, documentation, cost per function
for the following data:

1. Number of user inputs = 24

2. Number of user outputs = 46

3. Number of inquiries = 8

4. Number of files = 4

5. Number of external interfaces = 2

6. Effort = 36.9 p-m

7. Technical documents = 265 pages

8. User documents = 122 pages

9. Cost = $7744/ month

Various processing complexity factors are: 4, 1, 0, 3, 3, 5, 4, 4, 3, 3, 2, 2, 4, 5.

Solution:

Measurement Parameter                      Count x Weighting factor
1. Number of external inputs (EI)           24 * 4  =  96
2. Number of external outputs (EO)          46 * 4  = 184
3. Number of external inquiries (EQ)         8 * 6  =  48
4. Number of internal files (ILF)            4 * 10 =  40
5. Number of external interfaces (EIF)       2 * 5  =  10
Count-total                                         = 378

So the sum of all fi (i ← 1 to 14) = 4 + 1 + 0 + 3 + 3 + 5 + 4 + 4 + 3 + 3 + 2 + 2 + 4 + 5 = 43

FP = Count-total * [0.65 + 0.01 * ∑(fi)]
   = 378 * [0.65 + 0.01 * 43]
   = 378 * [0.65 + 0.43]
   = 378 * 1.08 = 408

Total pages of documentation = technical documents + user documents
                             = 265 + 122 = 387 pages

Documentation = Pages of documentation / FP
              = 387 / 408 ≈ 0.95 pages per FP

Productivity = FP / Effort
             = 408 / 36.9 ≈ 11.1 FP per person-month

Cost per function = Cost / Productivity
                  = 7744 / 11.1 ≈ $700 per function point
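The worked example can be checked with a short script that re-computes the count total, CAF, and FP value:

```python
# Re-computing the worked FP example above.
counts_and_weights = [
    (24, 4),   # external inputs (EI)
    (46, 4),   # external outputs (EO)
    (8, 6),    # external inquiries (EQ)
    (4, 10),   # internal logical files (ILF)
    (2, 5),    # external interface files (EIF)
]
count_total = sum(c * w for c, w in counts_and_weights)

fi = [4, 1, 0, 3, 3, 5, 4, 4, 3, 3, 2, 2, 4, 5]  # the 14 complexity factors
caf = 0.65 + 0.01 * sum(fi)
fp = count_total * caf

print(count_total)     # 378
print(round(caf, 2))   # 1.08
print(round(fp))       # 408
```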

Differentiate between FP and LOC

1. FP is specification based; LOC is analogy based.
2. FP is language independent; LOC is language dependent.
3. FP is user-oriented; LOC is design-oriented.
4. FP is extendible to LOC; LOC is convertible to FP (backfiring).

Benefits of FPA:
FPA is a tool to determine the size of a purchased application package by counting all the functions included in the package.
It is a tool to help users discover the benefit of an application package to their organization by counting the functions that specifically match their requirements.
It is a tool to measure the units of a software product to support quality and productivity analysis.
It is a vehicle to estimate the cost and resources required for software development and maintenance.
It is a normalization factor for software comparison.

Empirical Estimation Technique

Empirical estimation is a technique or model in which empirically derived formulas are used for predicting the data that are a required and essential part of the software project planning step.

These techniques are usually based on data collected previously from projects, on guesses, on prior experience with the development of similar types of projects, and on assumptions. They use the size of the software to estimate the effort.

Examples are the Delphi technique and the Expert Judgement technique.


Expert judgment is a technique in which judgment is provided based upon a specific set of criteria and/or expertise that has been acquired in a specific knowledge area, application area, or product area, a particular discipline, an industry, etc. Such expertise may be provided by any group or person with specialized education, knowledge, skill, experience, or training.

This knowledge base can be provided by a member of the project team, by multiple members of the project team, or by one or more team leaders. However, expert judgment typically requires expertise that is not present within the project team, so it is common for an external group or person with a specific relevant skill set or knowledge base to be brought in for a consultation.

Such expertise can be provided by any group or individual with specialized knowledge or training and is available from many sources, including:

Units within the organization;
Consultants;
Stakeholders, including customers or sponsors;
Professional and technical associations;
Industry groups;
Subject matter experts (SMEs);
The project management office (PMO);
Suppliers.

Delphi Cost Estimation

This approach tries to resolve some of the shortcomings of the expert judgment approach.

The coordinator gives a copy of the Software Requirements Specification (SRS) document to all of the estimators, along with a form for recording their cost estimates.

Individual estimates are completed by all of the estimators and submitted to the coordinator.

In their individual estimates, the estimators point out any unusual characteristics of the product that have a great impact on the estimation.

Considering the summary, all of the estimators re-estimate. This process is repeated for several rounds.

However, estimators cannot communicate with each other regarding the estimates (the reason is that the estimate of a more experienced or senior estimator may influence the other estimators).

After several iterations of estimation, the coordinator compiles the results and prepares the final estimate.
Project scheduling

A project schedule is a mechanism used to communicate which tasks need to be done, which organizational resources will be allocated to those tasks, and in what time frame the work needs to be performed.

Defining A Task Set for Software Project

The process of dividing a complex project into simpler, manageable work items is called a Work Breakdown Structure (WBS).

A work item is also called a task.

Project managers use this technique to simplify project execution.

WBS is not limited to a specific field; the methodology can be used for any type of project management.

The reasons for creating a WBS in a project are to:

 Organize the project accurately
 Assign task responsibilities accurately to the project team
 Estimate accurately the cost, time and risk involved in the project
 Illustrate the project scope
 Plan the project according to the availability of resources
Example of WBS

Project-task scheduling is a significant project planning activity. It comprises deciding which functions will be taken up when. To schedule the project plan, a software project manager needs to do the following:

1. Identify all the functions required to complete the project.
2. Break down large functions into small activities.
3. Determine the dependencies among the various activities.
4. Establish the most likely estimates for the time duration required to complete the activities.
5. Allocate resources to activities.
6. Plan the beginning and ending dates for the different activities.
7. Determine the critical path. The critical path is the chain of activities that determines the duration of the project.
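Step 7 above can be sketched as a longest-path computation over the activity dependency graph; the activities, durations, and dependencies below are a hypothetical example:

```python
# Hypothetical activity network: durations in days, with dependencies.
durations = {"A": 3, "B": 2, "C": 4, "D": 2}
depends_on = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}

finish = {}  # memoized earliest finish time of each activity

def earliest_finish(task):
    """Earliest finish = latest finish of all dependencies + own duration."""
    if task not in finish:
        start = max((earliest_finish(d) for d in depends_on[task]), default=0)
        finish[task] = start + durations[task]
    return finish[task]

# The project duration is set by the critical path A -> C -> D (3 + 4 + 2).
print(max(earliest_finish(t) for t in durations))  # 9
```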

Effective project scheduling leads to project success, reduced cost, and increased customer satisfaction. Scheduling in project management means listing the activities, deliverables, and milestones to be delivered within a project.
A project schedule contains more detail than an average weekly planner. The most common and important form of project schedule is the Gantt chart.

Process:
The manager needs to estimate the time and resources of the project while scheduling it.
All activities in the project must be arranged in a coherent sequence, meaning activities should be arranged in a logical, well-organized manner that is easy to understand. Initial estimates of the project may be made optimistically, i.e. assuming that everything favorable will happen and that no threats or problems will arise.
The total work is divided into various small activities or tasks in the project schedule. The project manager then decides the time required for each activity or task to be completed. Some activities are even conducted in parallel for efficient performance. The project manager should be aware that no stage of the project is problem-free.

Resources required for development of the project:

Human effort
Sufficient disk space on the server
Specialized hardware
Software technology
Travel allowance required by project staff, etc.
Advantages of Project Scheduling:
There are several advantages provided by a project schedule in project management:
It ensures that everyone remains on the same page as far as task completion, dependencies, and deadlines are concerned.
It helps in identifying issues and concerns early, such as a lack or unavailability of resources.
It also helps to identify relationships and to monitor progress.
It provides effective budget management and risk mitigation.

TimeLine Charts

Timelines are very important in project management because they help to visualize time-related activities, organize tasks, set deadlines, and identify delays.

The diagrams are useful for managers who want a high-level look at their tasks or at any time-related activities.

A timeline chart helps to visualize three main timeframes:

1. Planned time – the time originally scheduled for the tasks.
2. Actual time – the in-progress time, showing how long the tasks have been in progress.
3. Forecasted time – project managers know that setting expectations for delivery time is a significant and challenging management concern. Comparing planned and actual end dates makes it possible to produce forecasts.

Timeline components are the diagrammatic representations of the tasks. Timelines may be highly detailed or simple.

Factors:

 Set of tasks and objectives to be completed
 Approved dates and deadlines
 Dependencies between tasks
 Expected duration of tasks
Tracking the Schedule

The project schedule is a road map that defines the tasks and milestones to be tracked and controlled as the project proceeds.

Tracking can be done in a number of different ways:

By holding periodic project status meetings in which each member reports progress on tasks.
By evaluating the reviews conducted during the software engineering process.
By checking whether tentative project milestones have been completed by the scheduled dates.
By comparing the actual start date to the planned start date for every project task.
By meeting informally with practitioners to obtain their progress on the assigned tasks.

All of these tracking techniques are used by experienced project managers.

If things are going well (i.e. the project is on schedule and within the allocated budget, and reviews are going well), control is routine.

But if problems occur, the project manager must exercise control to reconcile them as quickly as possible.
Earned Value Analysis

Earned Value Analysis (EVA) is an industry-standard method of measuring a project's progress at any given point in time, forecasting its completion date and final cost, and analyzing variances in the schedule and budget as the project proceeds.

It compares the planned amount of work with what has actually been completed, to determine if the cost, schedule, and work accomplished are progressing in accordance with the plan. As work is completed, it is considered "earned".

Calculating Earned Value

Earned Value Management measures progress against a baseline. It involves calculating three key values for each activity in the WBS:
1. The Planned Value (PV) – the portion of the approved cost estimate planned to be spent on the given activity during a given period.
2. The Actual Cost (AC) – the total of the costs incurred in accomplishing work on the activity in a given period. This actual cost must correspond to whatever was budgeted for the Planned Value and the Earned Value (e.g. all labor, material, equipment, and indirect costs).
3. The Earned Value (EV), formerly known as the budgeted cost of work performed – the value of the work actually completed.
These three values are combined to determine, at that point in time, whether or not work is being accomplished as planned. The most commonly used measures are the cost variance:

Cost Variance (CV) = EV – AC

and the schedule variance:

Schedule Variance (SV) = EV – PV

These two values can be converted to efficiency indicators to reflect the cost and schedule performance of the project.

The most commonly used cost-efficiency indicator is the cost performance index (CPI). It is calculated thus:

CPI = EV / AC

The sum of all individual EV budgets divided by the sum of all individual ACs is known as the cumulative CPI, and is generally used to forecast the cost to complete a project.

The schedule performance index (SPI), calculated thus:

SPI = EV / PV

is often used with the CPI to forecast overall project completion estimates.

A negative schedule variance (SV) calculated at a given point in time means the project is
behind schedule, while a negative cost variance (CV) means the project is over budget.
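The four indicators defined above can be computed together; the project status figures below are hypothetical:

```python
def eva_indicators(pv, ev, ac):
    """Earned-value variances and efficiency indices."""
    return {
        "CV": ev - ac,    # cost variance: negative means over budget
        "SV": ev - pv,    # schedule variance: negative means behind schedule
        "CPI": ev / ac,   # cost performance index
        "SPI": ev / pv,   # schedule performance index
    }

# Hypothetical status: $100k of work planned, $80k earned, $90k spent.
m = eva_indicators(pv=100, ev=80, ac=90)
print(m["CV"], m["SV"])                        # -10 -20
print(round(m["CPI"], 2), round(m["SPI"], 2))  # 0.89 0.8
```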

Tools and Techniques

There are several software packages available that will prepare an earned value analysis, including:
Schedulemaker
Planisware OPX2
RiskTrak
Winsight
Primavera
