Acknowledgments
This work has been supported both financially and technically by the COCOMO II Program Affiliates: Aerospace, Air Force
Cost Analysis Agency, Allied Signal, AT&T, Bellcore, EDS, E-Systems, GDE Systems, Hughes, IDA, Litton, Lockheed
Martin, Loral, MCC, MDAC, Motorola, Northrop Grumman, Rational, Rockwell, SAIC, SEI, SPC, Sun, TI, TRW, USAF
Rome Lab, US Army Research Labs, Xerox.
Graduate Assistants:
Table of Contents

CHAPTER 1: FUTURE SOFTWARE PRACTICES MARKETPLACE
  1.1 Objectives
  1.2 Future Marketplace Model
CHAPTER 2: COCOMO II STRATEGY AND RATIONALE
  2.1 COCOMO II Models for the Software Marketplace Sectors
  2.2 COCOMO II Model Rationale and Elaboration
  2.3 Development Effort Estimates
    2.3.1 Nominal Person Months
    2.3.2 Breakage
    2.3.3 Adjusting for Reuse
    2.3.4 Adjusting for Re-engineering or Conversion
    2.3.5 Applications Maintenance
    2.3.6 Adjusting Person Months
  2.4 Development Schedule Estimates
    2.4.1 Output Ranges
CHAPTER 3: SOFTWARE ECONOMIES AND DISECONOMIES OF SCALE
  3.1 Approach
    3.1.1 Previous Approaches
  3.2 Scaling Drivers
    3.2.1 Precedentedness (PREC) and Development Flexibility (FLEX)
    3.2.2 Architecture / Risk Resolution (RESL)
    3.2.3 Team Cohesion (TEAM)
    3.2.4 Process Maturity (PMAT)
CHAPTER 4: THE APPLICATION COMPOSITION MODEL
  4.1 Approach
  4.2 Object Point Counting Procedure
CHAPTER 5: THE EARLY DESIGN MODEL
  5.1 Counting with Function Points
  5.2 Counting Procedure for Unadjusted Function Points
  5.3 Converting Function Points to Lines of Code
  5.4 Cost Drivers
    5.4.1 Overall Approach: Personnel Capability (PERS) Example
    5.4.2 Product Reliability and Complexity (RCPX)
    5.4.3 Required Reuse (RUSE)
    5.4.4 Platform Difficulty (PDIF)
    5.4.5 Personnel Experience (PREX)
    5.4.6 Facilities (FCIL)
    5.4.7 Schedule (SCED)
CHAPTER 6: THE POST-ARCHITECTURE MODEL
  6.1 Lines of Code Counting Rules
  6.2 Function Points
  6.3 Cost Drivers
    6.3.1 Product Factors
    6.3.2 Platform Factors
Chapter 1: Future Software Practices Marketplace

1.1 Objectives
The initial definition of COCOMO II and its rationale are described in this paper. The definition will be refined as additional
data are collected and analyzed. The primary objectives of the COCOMO II effort are:
To develop a software cost and schedule estimation model tuned to the life cycle practices of the 1990s and 2000s.
To develop software cost database and tool support capabilities for continuous model improvement.
To provide a quantitative analytic framework, and set of tools and techniques for evaluating the effects of software
technology improvements on software life cycle costs and schedules.
These objectives support the primary needs expressed by software cost estimation users in a recent Software Engineering
Institute survey [Park et al. 1994]. In priority order, these needs were for support of project planning and scheduling, project staffing, estimates-to-complete, project preparation, replanning and rescheduling, project tracking, contract negotiation,
proposal evaluation, resource leveling, concept exploration, design evaluation, and bid/no-bid decisions. For each of these
needs, COCOMO II will provide more up-to-date support than the original COCOMO and Ada COCOMO predecessors.
[Figure 1: Future Software Practices Marketplace Model. Estimated numbers of US performers by sector: End-User Programming (55,000,000), Application Generators and Composition Aids (600,000), Application Composition (700,000), System Integration (700,000), Infrastructure (750,000).]
End-User Programming will be driven by increasing computer literacy and competitive pressures for rapid, flexible, and user-driven information processing solutions. These trends will push the software marketplace toward having users develop most
information processing applications themselves via application generators. Some example application generators are
spreadsheets, extended query systems, and simple, specialized planning or inventory systems. They enable users to determine
their desired information processing application via domain-familiar options, parameters, or simple rules. Every enterprise
from Fortune 100 companies to small businesses and the U.S. Department of Defense will be involved in this sector.
Typical Infrastructure sector products will be in the areas of operating systems, database management systems, user interface
management systems, and networking systems. Increasingly, the Infrastructure sector will address "middleware" solutions for
such generic problems as distributed processing and transaction processing. Representative firms in the Infrastructure sector
are Microsoft, NeXT, Oracle, SyBase, Novell, and the major computer vendors.
In contrast to end-user programmers, who will generally know a good deal about their applications domain and relatively little
about computer science, the infrastructure developers will generally know a good deal about computer science and relatively
little about applications. Their product lines will have many reusable components, but the pace of technology (new processor,
memory, communications, display, and multimedia technology) will require them to build many components and capabilities
from scratch.
Performers in the three intermediate sectors in Figure 1 will need to know a good deal about computer science-intensive
Infrastructure software and also one or more applications domains. Creating this talent pool is a major national challenge.
These figures are judgment-based extensions of the Bureau of Labor Statistics moderate-growth labor distribution scenario
for the year 2005 [CSTB 1993; Silvestri and Lukaseiwicz 1991]. The 55 million End-User programming figure was obtained
by applying judgment-based extrapolations of the 1989 Bureau of the Census data on computer usage fractions by occupation [Kominski 1991] to generate end-user programming fractions by occupation category. These were then applied to the 2005 occupation-category populations (e.g., 10% of the 25M people in "Service Occupations"; 40% of the 17M people in "Marketing and Sales Occupations"). The 2005 total of 2.75M software practitioners was obtained by applying a factor of 1.6 to the number of people traditionally identified as "Systems Analysts and Computer Scientists".
The Application Generators sector will create largely prepackaged capabilities for user programming. Typical firms operating
in this sector are Microsoft, Lotus, Novell, Borland, and vendors of computer-aided planning, engineering, manufacturing,
and financial analysis systems. Their product lines will have many reusable components, but also will require a good deal of
new-capability development from scratch. Application Composition Aids will be developed both by the firms above and by
software product-line investments of firms in the Application Composition sector.
The Application Composition sector deals with applications which are too diversified to be handled by prepackaged solutions,
but which are sufficiently simple to be rapidly composable from interoperable components. Typical components will be
graphic user interface (GUI) builders, database or object managers, middleware for distributed processing or transaction
processing, hypermedia handlers, smart data finders, and domain-specific components such as financial, medical, or industrial
process control packages.
Most large firms will have groups to compose such applications, but a great many specialized software firms will provide
composed applications on contract. These range from large, versatile firms such as Andersen Consulting and EDS, to small
firms specializing in such specialty areas as decision support or transaction processing, or in such applications domains as
finance or manufacturing.
The Systems Integration sector deals with large scale, highly embedded, or unprecedented systems. Portions of these systems
can be developed with Application Composition capabilities, but their demands generally require a significant amount of up-front systems engineering and custom software development. Aerospace firms operate within this sector, as do major system
integration firms such as EDS and Andersen Consulting, large firms developing software-intensive products and services
(telecommunications, automotive, financial, and electronic products firms), and firms developing large-scale corporate
information systems or manufacturing support systems.
The COCOMO II strategy is to:
- Key the structure of COCOMO II to the future software marketplace sectors described above;
- Key the inputs and outputs of the COCOMO II submodels to the level of information available.
Figure 2, extended from [Boehm 1981, p. 311], indicates the effect of project uncertainties on the accuracy of software size and cost estimates. In the very early stages, one may not know the specific nature of the product to be developed to better than a factor of 4. As the life cycle proceeds, and product decisions are made, the nature of the product and its consequent size are better known, and the nature of the process and its consequent cost drivers are better known. The earlier "completed programs" size and effort data points in Figure 2 are the actual sizes and efforts of seven software products built to an imprecisely-defined specification [Boehm et al. 1984]. The later "USAF/ESD proposals" data points are from five proposals submitted to the U.S. Air Force Electronic Systems Division in response to a fairly thorough specification [Devenny 1976].
[Figure 2: Software size and cost estimation accuracy versus phase. The vertical axis shows the relative size range (0.25x, 0.5x, 1.25x, 1.5x, 2x, 4x); the horizontal axis shows life-cycle milestones from Feasibility through Concept of Operation, Plans and Rqts. (Rqts. Spec.), Product Design (Product Design Spec.), Detail Design (Detail Design Spec.), and Devel. and Test (Accepted Software). Data points plot size (DSI) for the completed programs and cost ($) for the USAF/ESD proposals.]
Note: A cost driver refers to a particular characteristic of the software development that has the effect of increasing or decreasing the amount of development effort, e.g., required product reliability, execution time constraints, project team application experience.

Note: These seven projects implemented the same algorithmic version of the Intermediate COCOMO cost model, but with the use of different interpretations of the other product specifications: produce a "friendly user interface" with a "single-user file system."
Third, given the situation in premises 1 and 2, COCOMO II enables projects to furnish coarse-grained cost driver information
in the early project stages, and increasingly fine-grained information in later stages. Consequently, COCOMO II does not
produce point estimates of software cost and effort, but rather range estimates tied to the degree of definition of the estimation
inputs. The uncertainty ranges in Figure 2 are used as starting points for these estimation ranges.
With respect to process strategy, Application Generator, System Integration, and Infrastructure software projects will involve a mix of three major process models. The appropriate models will depend on the project marketplace drivers and degree of product understanding.
The Application Composition model involves prototyping efforts to resolve potential high-risk issues such as user interfaces,
software/system interaction, performance, or technology maturity. The costs of this type of effort are best estimated by the
Applications Composition model.
The Early Design model involves exploration of alternative software/system architectures and concepts of operation. At this
stage, not enough is generally known to support fine-grain cost estimation. The corresponding COCOMO II capability
involves the use of function points and a coarse-grained set of 7 cost drivers (e.g., two cost drivers for Personnel Capability
and Personnel Experience in place of the 6 COCOMO II Post-Architecture model cost drivers covering various aspects of
personnel capability, continuity, and experience).
The Post-Architecture model involves the actual development and maintenance of a software product. This stage proceeds
most cost-effectively if a software life-cycle architecture has been developed; validated with respect to the system's mission,
concept of operation, and risk; and established as the framework for the product. The corresponding COCOMO II model has
about the same granularity as the previous COCOMO and Ada COCOMO models. It uses source instructions and / or
function points for sizing, with modifiers for reuse and software breakage; a set of 17 multiplicative cost drivers; and a set of
5 factors determining the project's scaling exponent. These factors replace the development modes (Organic, Semidetached,
or Embedded) in the original COCOMO model, and refine the four exponent-scaling factors in Ada COCOMO.
To summarize, COCOMO II provides the following three-stage series of models for estimation of Application Generator,
System Integration, and Infrastructure software projects:
1. The earliest phases or spiral cycles will generally involve prototyping, using the Application Composition model
capabilities. The COCOMO II Application Composition model supports these phases, and any other prototyping activities
occurring later in the life cycle.
2. The next phases or spiral cycles will generally involve exploration of architectural alternatives or incremental development
strategies. To support these activities, COCOMO II provides an early estimation model called the Early Design model. The level of detail in this model is consistent with the general level of information available and the general level of estimation
accuracy needed at this stage.
3. Once the project is ready to develop and sustain a fielded system, it should have a life-cycle architecture, which provides
more accurate information on cost driver inputs, and enables more accurate cost estimates. To support this stage, COCOMO
II provides the Post-Architecture model.
The above should be considered as current working hypotheses about the most effective forms for COCOMO II. They will be
subject to revision based on subsequent data analysis. Data analysis should also enable the further calibration of the
relationships between object points, function points, and source lines of code for various languages and composition systems,
enabling flexibility in the choice of sizing parameters.
PM_{nominal} = A \times (Size)^{B} \quad (EQ 1)
2.3.2 Breakage
COCOMO II uses a breakage percentage, BRAK, to adjust the effective size of the product. Breakage reflects the requirements volatility in a project. It is the percentage of code thrown away due to requirements volatility. For example, a project which delivers 100,000 instructions but discards the equivalent of an additional 20,000 instructions has a BRAK value of 20. This would be used to adjust the project's effective size to 120,000 instructions for a COCOMO II estimation. The BRAK factor is not used in the Applications Composition model, where a certain degree of product iteration is expected and is included in the data calibration.
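To make the arithmetic concrete, the following minimal sketch applies the breakage adjustment and Equation 1, reproducing the 100,000-instruction, BRAK = 20 example above. The calibration constant A and exponent B are illustrative placeholders, not values from this manual (B is derived from the scale factors of Chapter 3).

```python
def effective_size(size_ksloc, brak_percent):
    """Adjust size for breakage: code discarded due to requirements volatility."""
    return size_ksloc * (1 + brak_percent / 100.0)

def pm_nominal(size_ksloc, a=2.94, b=1.10):
    """Equation 1. A and B are illustrative placeholders, not calibrated here."""
    return a * size_ksloc ** b

size = effective_size(100.0, 20)   # 100 KSLOC delivered, BRAK = 20 -> 120 KSLOC
print(size)                        # 120.0
print(pm_nominal(size))            # nominal person-months for the adjusted size
```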
2.3.3 Adjusting for Reuse

Analysis of software reuse costs indicates that the reuse cost function is nonlinear in two significant ways (see Figure 3):
- It does not go through the origin. There is generally a cost of about 5% for assessing, selecting, and assimilating the reusable component.
- Small modifications generate disproportionately large costs. This is primarily due to two factors: the cost of understanding the software to be modified, and the relative cost of interface checking.
[Figure 3: Nonlinear Reuse Effects. Relative cost (0 to 1.0) versus amount modified (0 to 1.0), based on data from 2954 NASA modules [Selby 1988]. The observed cost function starts at 0.046 for unmodified reuse and rises through roughly 0.55, 0.70, and 0.75 toward 1.0 as the amount modified grows, lying well above the usual linear assumption.]
[Parikh and Zvegintzov 1983] contains data indicating that 47% of the effort in software maintenance involves understanding the software to be modified. Thus, as soon as one goes from unmodified (black-box) reuse to modified-software (white-box) reuse, one encounters this software understanding penalty. Also, [Gerlich and Denskat 1994] shows that, if one modifies k out of m software modules, the number N of module interface checks required is N = k \times (m - k) + k \times (k - 1)/2. Figure 4 shows this relation between the number of modules modified k and the resulting number of module interface checks required, for m = 10. The shape of this curve is similar for other values of m. It indicates that there are nonlinear effects involved in the module interface checking which occurs during the design, code, integration, and test of modified software.
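A short sketch of the [Gerlich and Denskat 1994] relation makes the nonlinearity visible; for m = 10 it reproduces the values plotted in Figure 4.

```python
def interface_checks(k, m):
    """N = k*(m-k) + k*(k-1)/2: module interface checks needed
    when k of m modules are modified."""
    return k * (m - k) + k * (k - 1) // 2

for k in (2, 4, 6, 8, 10):
    print(k, interface_checks(k, m=10))   # 17, 30, 39, 44, 45
```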
The size of both the software understanding penalty and the module interface checking penalty can be reduced by good
software structuring. Modular, hierarchical structuring can reduce the number of interfaces which need checking [Gerlich and
Denskat 1994], and software which is well structured, explained, and related to its mission will be easier to understand.
COCOMO II reflects this in its allocation of estimated effort for modifying reusable software.
A Reuse Model

The COCOMO II treatment of software reuse uses a nonlinear estimation model, Equation 2. This involves estimating the amount of software to be adapted, ASLOC, and three degree-of-modification quantities: the percentage of design modified (DM), the percentage of code modified (CM), and the percentage of the original integration effort required for integrating the adapted software (IM).

[Figure 4: Number of module interface checks N versus number of modules modified k, for m = 10. N rises nonlinearly: 17 at k = 2, 30 at k = 4, 39 at k = 6, 44 at k = 8, and 45 at k = 10.]
Table 1: Rating Scale for Software Understanding Increment SU

 | Very Low | Low | Nominal | High | Very High
Structure | Very low cohesion, high coupling, spaghetti code. | Moderately low cohesion, high coupling. | Reasonably well-structured; some weak areas. | High cohesion, low coupling. | Strong modularity, information hiding in data / control structures.
Application Clarity | No match between program and application world-views. | Some correlation between program and application. | Moderate correlation between program and application. | Good correlation between program and application. | Clear match between program and application world-views.
Self-Descriptiveness | Obscure code; documentation missing, obscure or obsolete. | Some code commentary and headers; some useful documentation. | Moderate level of code commentary, headers, documentation. | Good code commentary and headers; useful documentation; some weak areas. | Self-descriptive code; documentation up-to-date, well-organized, with design rationale.
SU Increment to ESLOC | 50 | 40 | 30 | 20 | 10
The other nonlinear reuse increment deals with the degree of Assessment and Assimilation (AA) needed to determine whether
a fully-reused software module is appropriate to the application, and to integrate its description into the overall product
description. Table 2 provides the rating scale and values for the assessment and assimilation increment. AA is a percentage.
Table 2: Rating Scale for Assessment and Assimilation Increment (AA)

AA Increment | Level of AA Effort
0 | None
2 | Basic module search and documentation
4 | Some module Test and Evaluation (T&E), documentation
6 | Considerable module T&E, documentation
8 | Extensive module T&E, documentation
The amount of effort required to modify existing software is a function not only of the amount of modification (AAF) and understandability of the existing software (SU), but also of the programmer's relative unfamiliarity with the software (UNFM).
The UNFM parameter is applied multiplicatively to the software understanding effort increment. If the programmer works
with the software every day, the 0.0 multiplier for UNFM will add no software understanding increment. If the programmer
has never seen the software before, the 1.0 multiplier will add the full software understanding effort increment. The rating of
UNFM is in Table 3.
Table 3: Rating Scale for Programmer Unfamiliarity (UNFM)

UNFM Increment | Level of Unfamiliarity
0.0 | Completely familiar
0.2 | Mostly familiar
0.4 | Somewhat familiar
0.6 | Considerably familiar
0.8 | Mostly unfamiliar
1.0 | Completely unfamiliar
ESLOC = ASLOC \times \frac{AA + AAF \times (1 + 0.02 \times SU \times UNFM)}{100}, \quad AAF \le 0.05

ESLOC = ASLOC \times \frac{AA + AAF + SU \times UNFM}{100}, \quad AAF > 0.05 \quad (EQ 2)
Equation 2 is used to determine an equivalent number of new instructions, equivalent source lines of code (ESLOC). ESLOC
is divided by one thousand to derive KESLOC which is used as the COCOMO size parameter. The calculation of ESLOC is
based on an intermediate quantity, the Adaptation Adjustment Factor (AAF), computed as AAF = 0.4(DM) + 0.3(CM) + 0.3(IM). The adaptation quantities DM, CM, and IM are defined as follows:
DM: Percent Design Modified. The percentage of the adapted software's design which is modified in order to adapt it to the new objectives and environment. (This is necessarily a subjective quantity.)
CM: Percent Code Modified. The percentage of the adapted software's code which is modified in order to adapt it to the new objectives and environment.
IM: Percent of Integration Required for Modified Software. The percentage of effort required to integrate the
adapted software into an overall product and to test the resulting product as compared to the normal amount of
integration and test effort for software of comparable size.
If there is no DM or CM (the component is being used unmodified) then there is no need for SU. If the code is being
modified then SU applies.
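The following sketch assembles the reuse model pieces: AAF from DM, CM, and IM, the AA, SU, and UNFM increments, and Equation 2. The branch threshold of 0.05 follows the form printed in this manual; later COCOMO II publications state the threshold as 50 with AAF expressed in percent, so treat the condition as an assumption to check against your model version.

```python
def aaf(dm, cm, im):
    """Adaptation Adjustment Factor from percent design, code, integration modified."""
    return 0.4 * dm + 0.3 * cm + 0.3 * im

def esloc(asloc, dm, cm, im, aa=0.0, su=0.0, unfm=0.0, threshold=0.05):
    """Equation 2. aa: Table 2 increment (0-8); su: Table 1 increment (10-50),
    forced to 0 for unmodified (black-box) reuse; unfm: Table 3 (0.0-1.0).
    The threshold value is as printed in this manual (an assumption; later
    versions use 50 with AAF in percent)."""
    f = aaf(dm, cm, im)
    if dm == 0 and cm == 0:
        su = 0.0                 # no DM or CM: no software understanding penalty
    if f <= threshold:
        return asloc * (aa + f * (1 + 0.02 * su * unfm)) / 100.0
    return asloc * (aa + f + su * unfm) / 100.0

# 10,000 adapted SLOC; 10% design, 20% code modified; 30% integration effort;
# AA = 2, SU = 30, UNFM = 0.4  ->  AAF = 19, ESLOC = 3300
print(esloc(10000, dm=10, cm=20, im=30, aa=2, su=30, unfm=0.4))
```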
2.3.4 Adjusting for Re-engineering or Conversion

PM_{nominal} = A \times (Size)^{B} + \frac{ASLOC \times (AT / 100)}{ATPROD} \quad (EQ 3)
The NIST case study also provides useful guidance on estimating the AT factor, which is a strong function of the difference
between the boundary conditions (e.g., use of COTS packages, change from batch to interactive operation) of the old code
and the re-engineered code. The NIST data on percentage of automated translation (from an original batch processing
application without COTS utilities) are given in Table 4 [Ruhl and Gunn 1991].
Table 4: Percentage of Automated Translation [Ruhl and Gunn 1991]

Re-engineering Target | AT (% automated translation)
Batch processing | 96%
Batch with SORT | 90%
Batch with DBMS | 88%
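A sketch of Equation 3 for re-engineering or conversion follows. The automated-translation term is separated out because translated code proceeds at a roughly constant rate; the ATPROD value of 2400 below is an illustrative assumption (of the order derived from the NIST case study), and a and b remain placeholder calibration values.

```python
def pm_reengineering(size_ksloc, asloc, at_percent, a=2.94, b=1.10, atprod=2400):
    """Equation 3: nominal effort plus a separate automated-translation term.
    a, b, atprod are illustrative placeholders, not constants from this section."""
    translated_sloc = asloc * at_percent / 100.0     # SLOC handled by translators
    return a * size_ksloc ** b + translated_sloc / atprod

# 50 KSLOC of remaining hand work plus 100,000 adapted SLOC that is
# 96% automatically translated (batch processing row of Table 4).
print(pm_reengineering(50, asloc=100000, at_percent=96))
```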
2.3.5 Applications Maintenance

Size_M = (BaseCodeSize \times MCF) \times MAF \quad (EQ 4)
The percentage of change to the base code is called the Maintenance Change Factor (MCF). The MCF is similar to the
Annual Change Traffic in COCOMO 81, except that maintenance periods other than a year can be used. Conceptually the
MCF represents the ratio in Equation 5:
MCF = \frac{SizeAdded + SizeModified}{BaseCodeSize} \quad (EQ 5)
Equation 6 is used when the fraction of code added or modified to the existing base code during the maintenance period is
known. Deleted code is not counted.
Size_M = (SizeAdded + SizeModified) \times MAF \quad (EQ 6)
The size can refer to thousands of source lines of code (KSLOC), Function Points, or Object Points. When using Function
Points or Object Points, it is better to estimate MCF in terms of the fraction of the overall application being changed, rather
than the fraction of inputs, outputs, screens, reports, etc. touched by the changes. Our experience indicates that counting the items touched can lead to significant overestimates, as relatively small changes can touch a relatively large number of items.
The initial maintenance size estimate (described above) is adjusted with a Maintenance Adjustment Factor (MAF), Equation
7. COCOMO 81 used different multipliers for the effects of Required Reliability (RELY) and Modern Programming Practices
(MODP) on maintenance versus development effort. COCOMO II instead uses the Software Understanding (SU) and Programmer Unfamiliarity (UNFM) factors from its reuse model to model the effects of well- or poorly-structured/understandable software on maintenance effort.
MAF = 1 + \frac{SU}{100} \times UNFM \quad (EQ 7)
The resulting maintenance effort estimation formula is the same as the COCOMO II Post-Architecture development model:
PM_M = A \times (Size_M)^{B} \times \prod_{i=1}^{17} EM_i \quad (EQ 8)
The COCOMO II approach to estimating either the maintenance activity duration, TM, or the average maintenance staffing
level, FSPM, is via the relationship:
PM_M = T_M \times FSP_M \quad (EQ 9)
Most maintenance is done as a level-of-effort activity. This relationship can estimate the level of effort, FSP_M, given T_M (as in annual maintenance estimates, where T_M = 12 months), or vice versa (given a fixed maintenance staff level, FSP_M, determine the necessary time, T_M, to complete the effort).
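Putting Equations 4 through 9 together, a maintenance estimate can be sketched as below; the calibration constants and effort multipliers are placeholders, and the scale exponent B is taken as given.

```python
import math

def maintenance_size(base_ksloc, mcf, su, unfm):
    """Equations 4 and 7: maintenance size = base * MCF * MAF."""
    maf = 1 + (su / 100.0) * unfm
    return base_ksloc * mcf * maf

def maintenance_effort(size_m, a=2.94, b=1.10, effort_multipliers=()):
    """Equation 8: same form as the Post-Architecture development model."""
    return a * size_m ** b * math.prod(effort_multipliers)

# 100 KSLOC base product with 15% added or modified over one year,
# SU = 30, UNFM = 0.4; staffing via Equation 9 with TM = 12 months.
size_m = maintenance_size(100.0, mcf=0.15, su=30, unfm=0.4)
pm_m = maintenance_effort(size_m)
fsp_m = pm_m / 12.0            # average annual maintenance staff level
print(size_m, pm_m, fsp_m)
```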
2.3.6 Adjusting Person Months

PM_{adjusted} = PM_{nominal} \times \prod_{i} EM_i \quad (EQ 10)
2.4 Development Schedule Estimates

TDEV = \left[3.0 \times (PM)^{(0.33 + 0.2 \times (B - 1.01))}\right] \times \frac{SCED\%}{100} \quad (EQ 11)
where TDEV is the calendar time in months from the determination of a product's requirements baseline to the completion of an acceptance activity certifying that the product satisfies its requirements. PM is the estimated person-months excluding the SCED effort multiplier, B is the sum of project scale factors (discussed in the next chapter) and SCED% is the compression / expansion percentage in the SCED effort multiplier in Table 21.
As COCOMO II evolves, it will have a more extensive schedule estimation model, reflecting the different classes of process
model a project can use; the effects of reusable and COTS software; and the effects of applications composition capabilities.
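A sketch of the schedule equation as reconstructed above; the 3.0 coefficient and 0.33 baseline exponent follow the published COCOMO II schedule model, and SCED% comes from the SCED rating (75% to 160%).

```python
def tdev(pm, b, sced_percent=100):
    """Equation 11: calendar months from requirements baseline to acceptance.
    pm excludes the SCED effort multiplier; b is the scaling exponent."""
    return (3.0 * pm ** (0.33 + 0.2 * (b - 1.01))) * sced_percent / 100.0

print(tdev(pm=105, b=1.01))                   # nominal schedule, ~13.9 months
print(tdev(pm=105, b=1.01, sced_percent=75))  # compressed schedule
```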
2.4.1 Output Ranges

Stage | Optimistic Estimate | Pessimistic Estimate
1 (Application Composition) | 0.50 E | 2.0 E
2 (Early Design) | 0.67 E | 1.5 E
3 (Post-Architecture) | 0.80 E | 1.25 E
Chapter 3: Software Economies and Diseconomies of Scale

B = 1.01 + 0.01 \times \sum_{i} W_i \quad (EQ 12)
For example, if scale factors with an Extra High rating are each assigned a weight of (0), then a 100 KSLOC project with Extra High ratings for all factors will have \sum W_i = 0, B = 1.01, and a relative effort E = 100^{1.01} = 105 PM. If scale factors with a Very Low rating are each assigned a weight of (5), then a project with Very Low (5) ratings for all factors will have \sum W_i = 25, B = 1.26, and a relative effort E = 100^{1.26} = 331 PM. This represents a large variation, but the increase involved in a one-unit change in one of the factors is only about 4.7%.
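The sketch below reproduces the worked example: with all five scale factors rated Extra High (weights 0) versus all Very Low (weights 5), a 100 KSLOC project's relative effort moves from about 105 to about 331 PM.

```python
def scale_exponent(weights):
    """Equation 12: B = 1.01 + 0.01 * sum(Wi) over the five scale factors."""
    return 1.01 + 0.01 * sum(weights)

def relative_effort(size_ksloc, weights):
    return size_ksloc ** scale_exponent(weights)

print(relative_effort(100, [0, 0, 0, 0, 0]))  # ~105 PM (B = 1.01)
print(relative_effort(100, [5, 5, 5, 5, 5]))  # ~331 PM (B = 1.26)
```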
Scale Factors (Wi) | Very Low | Low | Nominal | High | Very High | Extra High
PREC | thoroughly unprecedented | largely unprecedented | somewhat unprecedented | generally familiar | largely familiar | thoroughly familiar
FLEX | rigorous | occasional relaxation | some relaxation | general conformity | some conformity | general goals
RESL | little (20%) | some (40%) | often (60%) | generally (75%) | mostly (90%) | full (100%)
TEAM | very difficult interactions | some difficult interactions | basically cooperative interactions | largely cooperative | highly cooperative | seamless interactions
PMAT | weighted average of KPA compliance levels, determined as in Equation 13 (Section 3.2.4)
3.2.1 Precedentedness (PREC) and Development Flexibility (FLEX)

Feature | Very Low | Nominal / High | Extra High
Precedentedness:
Organizational understanding of product objectives | General | Considerable | Thorough
Experience in working with related software systems | Moderate | Considerable | Extensive
Concurrent development of associated new hardware and operational procedures | Extensive | Moderate | Some
Need for innovative data processing architectures, algorithms | Considerable | Some | Minimal
Development Flexibility:
Need for software conformance with pre-established requirements | Full | Considerable | Basic
Need for software conformance with external interface specifications | Full | Considerable | Basic
Premium on early completion | High | Medium | Low
3.2.2 Architecture / Risk Resolution (RESL)

Characteristic | Very Low | Low | Nominal | High | Very High | Extra High
Risk Management Plan identifies all critical risk items, establishes milestones for resolving them | None | Little | Some | Generally | Mostly | Fully
Schedule, budget, and internal milestones compatible with Risk Management Plan | None | Little | Some | Generally | Mostly | Fully
Percent of development schedule devoted to establishing architecture, given general product objectives | 5 | 10 | 17 | 25 | 33 | 40
Percent of required top software architects available to project | 20 | 40 | 60 | 80 | 100 | 120
Tool support available for resolving risk items, developing and verifying architectural specifications | None | Little | Some | Good | Strong | Full
Level of uncertainty in key architecture drivers: mission, user interface, COTS, hardware, technology, performance | Extreme | Significant | Considerable | Some | Little | Very Little
Number and criticality of risk items | > 10 Critical | 5-10 Critical | 2-4 Critical | 1 Critical | > 5 Non-Critical | < 5 Non-Critical
3.2.3 Team Cohesion (TEAM)

Characteristic | Very Low | Low | Nominal | High | Very High | Extra High
Consistency of stakeholder objectives and cultures | Little | Some | Basic | Considerable | Strong | Full
Ability, willingness of stakeholders to accommodate other stakeholders' objectives | Little | Some | Basic | Considerable | Strong | Full
Experience of stakeholders in operating as a team | None | Little | Little | Basic | Considerable | Extensive
Stakeholder teambuilding to achieve shared vision and commitments | None | Little | Little | Basic | Considerable | Extensive
3.2.4 Process Maturity (PMAT)

The PMAT rating is determined from the degree of compliance with each of 18 CMM Key Process Areas (KPAs). Each KPA is rated on the following scale: Almost Always (>90%), Frequently (60-90%), About Half (40-60%), Occasionally (10-40%), Rarely If Ever (<10%), Does Not Apply, or Don't Know. Representative KPAs from the questionnaire include:

1. Requirements Management
6. Software Configuration Management
9. Training Program
12. Intergroup Coordination
13. Peer Reviews
14. Quantitative Process Management
15. Software Quality Management
16. Defect Prevention
- Check Almost Always when the goals are consistently achieved and are well established in standard operating procedures (over 90% of the time).
- Check Frequently when the goals are achieved relatively often, but sometimes are omitted under difficult circumstances (about 60 to 90% of the time).
- Check About Half when the goals are achieved about half of the time (about 40 to 60% of the time).
- Check Occasionally when the goals are sometimes achieved, but less often (about 10 to 40% of the time).
- Check Rarely If Ever when the goals are rarely if ever achieved (less than 10% of the time).
- Check Does Not Apply when you have the required knowledge about your project or organization and the KPA, but you feel the KPA does not apply to your circumstances.
- Check Don't Know when you are uncertain about how to respond for the KPA.

After the level of KPA compliance is determined, each compliance level is weighted and a PMAT factor is calculated, as in Equation 13. Initially, all KPAs will be equally weighted.
PMAT = 5 - \sum_{i=1}^{18} \frac{KPA\%_i}{100} \times \frac{5}{18} \quad (EQ 13)
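A sketch of Equation 13 with equal KPA weighting: each of the 18 KPA compliance percentages pulls the PMAT factor down from 5 toward 0.

```python
def pmat(kpa_percentages):
    """Equation 13: PMAT from 18 KPA compliance percentages (0-100 each)."""
    assert len(kpa_percentages) == 18
    return 5 - sum(p / 100.0 * (5.0 / 18.0) for p in kpa_percentages)

print(pmat([100] * 18))  # 0.0 -- full compliance everywhere
print(pmat([50] * 18))   # 2.5
print(pmat([0] * 18))    # 5.0
```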
Chapter 4: The Application Composition Model
4.1 Approach
Object Point estimation is a relatively new software sizing approach, but it is well-matched to the practices in the Applications
Composition sector. It is also a good match to associated prototyping efforts, based on the use of a rapid-composition
Integrated Computer Aided Software Environment (ICASE) providing graphic user interface builders, software development
tools, and large, composable infrastructure and applications components. In these areas, it has compared well to Function
Point estimation on a nontrivial (but still limited) set of applications.
The [Banker et al. 1991] comparative study of Object Point vs. Function Point estimation analyzed a sample of 19 investment
banking software projects from a single organization, developed using ICASE applications composition capabilities, and
ranging from 4.7 to 71.9 person-months of effort. The study found that the Object Points approach explained 73% of the
variance (R2) in person-months adjusted for reuse, as compared to 76% for Function Points.
A subsequent statistically-designed experiment [Kauffman and Kumar 1993] involved four experienced project managers using Object Points and Function Points to estimate the effort required on two completed projects (3.5 and 6 actual person-months), based on project descriptions of the type available at the beginning of such projects. The experiment found that Object Points and Function Points produced comparably accurate results (slightly more accurate with Object Points, but not statistically significant). From a usage standpoint, the average time to produce an Object Point estimate was about 47% of the corresponding average time for Function Point estimates. Also, the managers considered the Object Point method easier to use (both of these results were statistically significant).
Thus, although these results are not yet broadly-based, their match to Applications Composition software development
appears promising enough to justify selecting Object Points as the starting point for the COCOMO II Applications
Composition estimation model.
NOP: New Object Points (Object Point count adjusted for reuse)
srvr: number of server (mainframe or equivalent) data tables used in conjunction with the SCREEN or REPORT.
clnt: number of client (personal workstation) data tables used in conjunction with the SCREEN or REPORT.
%reuse: the percentage of screens, reports, and 3GL modules reused from previous applications, pro-rated by degree
of reuse.
The productivity rates are based on an analysis of the year-1 and year-2 project data in [Banker et al. 1991]. In year-1, the CASE tool was itself under construction and the developers were new to its use. The average productivity of 7 NOP/person-month in the twelve year-1 projects is associated with the Low levels of developer and ICASE maturity and capability. In the seven year-2 projects, both the CASE tool and the developers' capabilities were considerably more mature. The average productivity was 25 NOP/person-month, corresponding with the High levels of developer and ICASE maturity.
As another definitional point, note that the use of the term "object" in "Object Points" defines screens, reports, and 3GL
modules as objects. This may or may not have any relationship to other definitions of "objects", such as those possessing
features such as class affiliation, inheritance, encapsulation, message passing, and so forth. Counting rules for "objects" of
that nature, when used in languages such as C++, will be discussed in the chapter on the Post Architecture model.
4.2 Object Point Counting Procedure

1. Assess Object-Counts: estimate the number of screens, reports, and 3GL components that will comprise this application. Assume the standard definitions of these objects in your ICASE environment.

2. Classify each object instance into simple, medium and difficult complexity levels depending on values of characteristic dimensions. Use the following scheme:
For Screens:

Number of views contained | Total < 4 data tables (< 2 srvr, < 3 clnt) | Total < 8 data tables (2-3 srvr, 3-5 clnt) | Total 8+ data tables (> 3 srvr, > 5 clnt)
< 3 | simple | simple | medium
3 - 7 | simple | medium | difficult
> 8 | medium | difficult | difficult

For Reports:

Number of sections contained | Total < 4 data tables (< 2 srvr, < 3 clnt) | Total < 8 data tables (2-3 srvr, 3-5 clnt) | Total 8+ data tables (> 3 srvr, > 5 clnt)
0 or 1 | simple | simple | medium
2 or 3 | simple | medium | difficult
4+ | medium | difficult | difficult

3. Weigh the number in each cell using the following scheme. The weights reflect the relative effort required to implement an instance of that complexity level:
Object Type | Simple | Medium | Difficult
Screen | 1 | 2 | 3
Report | 2 | 5 | 8
3GL Component | - | - | 10
4. Determine Object-Points: add all the weighted object instances to get one number, the Object-Point count.

5. Estimate the percentage of reuse you expect to be achieved in this project. Compute the New Object Points to be developed:

NOP = \frac{(ObjectPoints) \times (100 - \%reuse)}{100} \quad (EQ 14)
6. Determine a productivity rate, PROD = NOP / person-month, from the following scheme:

Developers' experience and capability; ICASE maturity and capability | Very Low | Low | Nominal | High | Very High
PROD | 4 | 7 | 13 | 25 | 50

7. Compute the estimated person-months:
PM = \frac{NOP}{PROD} \quad (EQ 15)
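The whole Object Point procedure (steps 1-7, Equations 14 and 15) fits in a few lines. The complexity classification from step 2 is left to the caller; weights follow the step 3 table, and the PROD values follow the step 6 scheme.

```python
WEIGHTS = {"screen": {"simple": 1, "medium": 2, "difficult": 3},
           "report": {"simple": 2, "medium": 5, "difficult": 8},
           "3gl":    {"difficult": 10}}           # 3GL components always weigh 10

PROD = {"Very Low": 4, "Low": 7, "Nominal": 13, "High": 25, "Very High": 50}

def object_points(instances):
    """instances: list of (object_type, complexity) pairs from steps 1-2."""
    return sum(WEIGHTS[t][c] for t, c in instances)

def new_object_points(op, reuse_percent):
    """Equation 14: discount Object Points by expected reuse."""
    return op * (100 - reuse_percent) / 100.0

inst = [("screen", "simple")] * 8 + [("report", "medium")] * 3 + [("3gl", "difficult")]
nop = new_object_points(object_points(inst), reuse_percent=20)
print(nop / PROD["Nominal"])   # Equation 15: person-months at Nominal PROD

# OP = 8 + 15 + 10 = 33, NOP = 26.4, PM = 26.4 / 13, i.e. about 2 person-months
```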
Chapter 5: The Early Design Model

5.1 Counting with Function Points

Table 10: User Function Types

External Input (EI): Count each unique user data or user control input type that (i) enters the external boundary of the software system being measured and (ii) adds or changes data in a logical internal file.
External Output (EO): Count each unique user data or control output type that leaves the external boundary of the software system being measured.
Internal Logical File (ILF): Count each major logical group of user data or control information in the software system as a logical internal file type. Include each logical file (e.g., each logical group of data) that is generated, used, or maintained by the software system.
External Interface File (EIF): Files passed or shared between software systems should be counted as external interface file types within each system.
External Inquiry (EQ): Count each unique input-output combination, where an input causes and generates an immediate output, as an external inquiry type.
Each instance of these function types is then classified by complexity level. The complexity levels determine a set of weights,
which are applied to their corresponding function counts to determine the Unadjusted Function Points quantity. This is the
Function Point sizing metric used by COCOMO II. The usual Function Point procedure involves assessing the degree of
influence (DI) of fourteen application characteristics on the software project determined according to a rating scale of 0.0 to
0.05 for each characteristic. The 14 ratings are added together, and added to a base level of 0.65 to produce a general
characteristics adjustment factor that ranges from 0.65 to 1.35.
Each of these fourteen characteristics, such as distributed functions, performance, and reusability, thus has at most a 5% contribution to estimated effort. This is inconsistent with COCOMO experience; thus COCOMO II uses Unadjusted
Function Points for sizing, and applies its reuse factors, cost driver effort multipliers, and exponent scale factors to this sizing
quantity.
5.2 Counting Procedure for Unadjusted Function Points

2. Determine complexity-level function counts. Classify each function count into Low, Average and High complexity levels depending on the number of data element types contained and the number of file types referenced. Use the following scheme:
For ILF and EIF:

Record Elements | Data Elements 1 - 19 | 20 - 50 | 51+
1 | Low | Low | Avg
2 - 5 | Low | Avg | High
6+ | Avg | High | High

For EO and EQ:

File Types | Data Elements 1 - 5 | 6 - 19 | 20+
0 or 1 | Low | Low | Avg
2 - 3 | Low | Avg | High
4+ | Avg | High | High

For EI:

File Types | Data Elements 1 - 4 | 5 - 15 | 16+
0 or 1 | Low | Low | Avg
2 - 3 | Low | Avg | High
3+ | Avg | High | High
3. Apply complexity weights. Weight the number in each cell using the following scheme. The weights reflect the relative value of the function to the user:

Function Type | Low | Average | High
Internal Logical Files | 7 | 10 | 15
External Interface Files | 5 | 7 | 10
External Inputs | 3 | 4 | 6
External Outputs | 4 | 5 | 7
External Inquiries | 3 | 4 | 6
4. Compute Unadjusted Function Points. Add all the weighted function counts to get one number, the Unadjusted Function Points.
Note: The word file refers to a logically related group of data and not the physical implementation of those groups of data.
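A sketch of the Unadjusted Function Point computation (steps 2-4) follows, using the complexity weights from the table above; the EI, EO, and EQ weights are the standard IFPUG values (3/4/6, 4/5/7, 3/4/6).

```python
# Complexity weights for each function type (step 3 table).
UFP_WEIGHTS = {
    "ILF": {"Low": 7, "Avg": 10, "High": 15},
    "EIF": {"Low": 5, "Avg": 7,  "High": 10},
    "EI":  {"Low": 3, "Avg": 4,  "High": 6},
    "EO":  {"Low": 4, "Avg": 5,  "High": 7},
    "EQ":  {"Low": 3, "Avg": 4,  "High": 6},
}

def unadjusted_fp(counts):
    """counts: {(function_type, complexity): number of instances}."""
    return sum(UFP_WEIGHTS[t][c] * n for (t, c), n in counts.items())

print(unadjusted_fp({("ILF", "Avg"): 2, ("EI", "Low"): 10, ("EO", "High"): 4}))
# 2*10 + 10*3 + 4*7 = 78
```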
5.3 Converting Function Points to Lines of Code

Language | SLOC / UFP
Ada | 71
AI Shell | 49
APL | 32
Assembly | 320
Assembly (Macro) | 213
ANSI/Quick/Turbo Basic | 64
Basic - Compiled | 91
Basic - Interpreted | 128
C | 128
C++ | 29
ANSI Cobol 85 | 91
Fortran 77 | 105
Forth | 64
Jovial | 105
Lisp | 64
Modula 2 | 80
Pascal | 91
Prolog | 64
Report Generator | 80
Spreadsheet | 6
5.4 Cost Drivers

Early Design Cost Driver | Counterpart Combined Post-Architecture Cost Drivers
RCPX | RELY, DATA, CPLX, DOCU
RUSE | RUSE
PDIF | TIME, STOR, PVOL
PERS | ACAP, PCAP, PCON
PREX | AEXP, PEXP, LTEX
FCIL | TOOL, SITE
SCED | SCED

5.4.1 Overall Approach: Personnel Capability (PERS) Example

 | Extra Low | Very Low | Low | Nominal | High | Very High | Extra High
Sum of ACAP, PCAP and PCON ratings | 3, 4 | 5, 6 | 7, 8 | 9 | 10, 11 | 12, 13 | 14, 15
Combined ACAP and PCAP percentile | 20% | 39% | 45% | 55% | 65% | 75% | 85%
Annual personnel turnover | 45% | 30% | 20% | 12% | 9% | 5% | 4%
5.4.2 Product Reliability and Complexity (RCPX)

 | Extra Low | Very Low | Low | Nominal | High | Very High | Extra High
Sum of RELY, DATA, CPLX, DOCU ratings | 5, 6 | 7, 8 | 9 - 11 | 12 | 13 - 15 | 16 - 18 | 19 - 21
Emphasis on reliability, documentation | Very little | Little | Some | Basic | Strong | Very Strong | Extreme
Product complexity | Very simple | Simple | Some | Moderate | Complex | Very complex | Extremely complex
Database size | Small | Small | Small | Moderate | Large | Very Large | Very Large
5.4.3 Required Reuse (RUSE)

 | Very Low | Low | Nominal | High | Very High | Extra High
RUSE | - | none | across project | across program | across product line | across multiple product lines
5.4.4 Platform Difficulty (PDIF)

 | Low | Nominal | High | Very High | Extra High
Sum of TIME, STOR, PVOL ratings | 8 | 9 | 10 - 12 | 13 - 15 | 16, 17
Time and storage constraint | 50% | 50% | 65% | 80% | 90%
Platform volatility | Very stable | Stable | Somewhat volatile | Volatile | Highly volatile
5.4.5 Personnel Experience (PREX)

 | Extra Low | Very Low | Low | Nominal | High | Very High | Extra High
Sum of AEXP, PEXP, LTEX ratings | 3, 4 | 5, 6 | 7, 8 | 9 | 10, 11 | 12, 13 | 14, 15
Applications, Platform, Language and Tool Experience | 3 mo. | 5 months | 9 months | 1 year | 2 years | 4 years | 6 years
5.4.6 Facilities (FCIL)

 | Extra Low | Very Low | Low | Nominal | High | Very High | Extra High
Sum of TOOL and SITE ratings | 2 | 3 | 4, 5 | 6 | 7, 8 | 9, 10 | 11
TOOL support | Minimal | Some | Simple CASE tool collection | Basic life-cycle tools | Good; moderately integrated | Strong; moderately integrated | Strong; well integrated
Multisite conditions | Weak support of complex multisite development | Some support of complex M/S devel. | Some support of moderately complex M/S devel. | Basic support of moderately complex M/S devel. | Strong support of moderately complex M/S devel. | Strong support of simple M/S devel. | Very strong support of collocated or simple M/S devel.
5.4.7 Schedule (SCED)

 | Very Low | Low | Nominal | High | Very High | Extra High
SCED | 75% of nominal | 85% | 100% | 130% | 160% | -
Chapter 6: The Post-Architecture Model

6.1 Lines of Code Counting Rules

[Table: COCOMO II source lines of code definition, based on the SEI definition checklist for source statement counts. Definition name: Logical Source Statements; originator: COCOMO II. For each attribute the checklist marks which categories are included in or excluded from the count. The How Produced attribute distinguishes: 1. Programmed; 2. Generated with source code generators; 3. Converted with automated translators; 4. Copied or reused without change; 5. Modified; 6. Removed. The Origin attribute is treated similarly, with eight categories and Includes/Excludes designations.]
6.3.1 Product Factors

 | Very Low | Low | Nominal | High | Very High | Extra High
RELY | slight inconvenience | low, easily recoverable losses | moderate, easily recoverable losses | high financial loss | risk to human life | -
D/P = \frac{DatabaseSize\ (Bytes)}{ProgramSize\ (SLOC)} \quad (EQ 16)
DATA is rated as low if D/P is less than 10 and it is very high if it is greater than 1000.
 | Very Low | Low | Nominal | High | Very High | Extra High
DATA | - | DB bytes/Pgm SLOC < 10 | 10 <= D/P < 100 | 100 <= D/P < 1000 | D/P >= 1000 | -
 | Very Low | Low | Nominal | High | Very High | Extra High
RUSE | - | none | across project | across program | across product line | across multiple product lines
 | Very Low | Low | Nominal | High | Very High | Extra High
DOCU | Many life-cycle needs uncovered | Some life-cycle needs uncovered | Right-sized to life-cycle needs | Excessive for life-cycle needs | Very excessive for life-cycle needs | -
6.3.2 Platform Factors

 | Very Low | Low | Nominal | High | Very High | Extra High
TIME | - | - | 50% use of available execution time | 70% | 85% | 95%
 | Very Low | Low | Nominal | High | Very High | Extra High
STOR | - | - | 50% use of available storage | 70% | 85% | 95%
 | Very Low | Low | Nominal | High | Very High | Extra High
PVOL | - | major change every 12 mo.; minor change every 1 mo. | major: 6 mo.; minor: 2 wk. | major: 2 mo.; minor: 1 wk. | major: 2 wk.; minor: 2 days | -
 | Very Low | Low | Nominal | High | Very High | Extra High
ACAP | 15th percentile | 35th percentile | 55th percentile | 75th percentile | 90th percentile | -
PCAP | 15th percentile | 35th percentile | 55th percentile | 75th percentile | 90th percentile | -
 | Very Low | Low | Nominal | High | Very High | Extra High
AEXP | 2 months | 6 months | 1 year | 3 years | 6 years | -
PEXP | 2 months | 6 months | 1 year | 3 years | 6 years | -
 | Very Low | Low | Nominal | High | Very High | Extra High
LTEX | 2 months | 6 months | 1 year | 3 years | 6 years | -
PCON | 48% / year | 24% / year | 12% / year | 6% / year | 3% / year | -
 | Very Low | Low | Nominal | High | Very High | Extra High
TOOL | edit, code, debug | simple, frontend, backend CASE, little integration | basic lifecycle tools, moderately integrated | strong, mature lifecycle tools, moderately integrated | strong, mature, proactive lifecycle tools, well integrated with processes, methods, reuse | -
 | Very Low | Low | Nominal | High | Very High | Extra High
SITE: Communications | Some phone, mail | Individual phone, FAX | Narrowband email | Wideband electronic communication | Wideband electronic communication, occasional video conf. | Interactive multimedia
 | Very Low | Low | Nominal | High | Very High | Extra High
SCED | 75% of nominal | 85% | 100% | 130% | 160% | -
Table 20: Module Complexity Ratings versus Type of Module

Control Operations:
- Low: Straightforward nesting of structured programming operators. Mostly simple predicates.

Computational Operations:
- Very Low: Evaluation of simple expressions: e.g., A = B + C * (D - E).
- Low: Evaluation of moderate-level expressions: e.g., D = SQRT(B**2 - 4.*A*C).
- Nominal: Use of standard math and statistical routines. Basic matrix/vector operations.
- High: Basic numerical analysis: multivariate interpolation, ordinary differential equations. Basic truncation, roundoff concerns.
- Very High: Difficult but structured numerical analysis: near-singular matrix equations, partial differential equations. Simple parallelization.
- Extra High: Difficult and unstructured numerical analysis: highly accurate analysis of noisy, stochastic data. Complex parallelization.

Device-dependent Operations:
- Low: No cognizance needed of particular processor or I/O device characteristics. I/O done at GET/PUT level.
- Nominal: I/O processing includes device selection, status checking and error processing.
- High: Operations at physical I/O level (physical storage address translations; seeks, reads, etc.). Optimized I/O overlap.
- Very High: Routines for interrupt diagnosis, servicing, masking. Communication line handling. Performance-intensive embedded systems.

Data Management Operations:
- High: Simple triggers activated by data stream contents. Complex data restructuring.
- Very High: Distributed database coordination. Complex triggers. Search optimization.
- Extra High: Highly coupled, dynamic relational and object structures. Natural language data management.

User Interface Management Operations:
- Low: Use of simple graphic user interface (GUI) builders.
- Nominal: Simple use of widget set.
- High: Widget set development and extension. Simple voice I/O, multimedia.
- Very High: Moderately complex 2D/3D, dynamic graphics, multimedia.
- Extra High: Complex multimedia, virtual reality.
Table 21: Post-Architecture Cost Driver Rating Summary

Driver | Very Low | Low | Nominal | High | Very High | Extra High
RELY | slight inconvenience | low, easily recoverable losses | moderate, easily recoverable losses | high financial loss | risk to human life | -
DATA | - | DB bytes/Pgm SLOC < 10 | 10 <= D/P < 100 | 100 <= D/P < 1000 | D/P >= 1000 | -
CPLX | see Table 20
RUSE | - | none | across project | across program | across product line | across multiple product lines
DOCU | Many life-cycle needs uncovered | Some life-cycle needs uncovered | Right-sized to life-cycle needs | Excessive for life-cycle needs | Very excessive for life-cycle needs | -
TIME | - | - | 50% use of available execution time | 70% | 85% | 95%
STOR | - | - | 50% use of available storage | 70% | 85% | 95%
PVOL | - | major change every 12 mo.; minor change every 1 mo. | major: 6 mo.; minor: 2 wk. | major: 2 mo.; minor: 1 wk. | major: 2 wk.; minor: 2 days | -
ACAP | 15th percentile | 35th percentile | 55th percentile | 75th percentile | 90th percentile | -
PCAP | 15th percentile | 35th percentile | 55th percentile | 75th percentile | 90th percentile | -
PCON | 48% / year | 24% / year | 12% / year | 6% / year | 3% / year | -
AEXP | 2 months | 6 months | 1 year | 3 years | 6 years | -
PEXP | 2 months | 6 months | 1 year | 3 years | 6 years | -
LTEX | 2 months | 6 months | 1 year | 3 years | 6 years | -
TOOL | edit, code, debug | simple, frontend, backend CASE, little integration | basic lifecycle tools, moderately integrated | strong, mature lifecycle tools, moderately integrated | strong, mature, proactive lifecycle tools, well integrated with processes, methods, reuse | -
SITE: Collocation | International | Multi-city and Multi-company | Multi-city or Multi-company | Same city or metro. area | Same building or complex | Fully collocated
SITE: Communications | Some phone, mail | Individual phone, FAX | Narrowband email | Wideband electronic communication | Wideband elect. comm., occasional video conf. | Interactive multimedia
SCED | 75% of nominal | 85% | 100% | 130% | 160% | -
Chapter 7: References
Amadeus (1994), Amadeus Measurement System User's Guide, Version 2.3a, Amadeus Software Research, Inc., Irvine, California, July 1994.
Banker, R., R. Kauffman and R. Kumar (1994), "An Empirical Test of Object-Based Output Measurement Metrics in a
Computer Aided Software Engineering (CASE) Environment," Journal of Management Information Systems (to appear,
1994).
Banker, R., H. Chang and C. Kemerer (1994a), "Evidence on Economies of Scale in Software Development," Information
and Software Technology (to appear, 1994).
Behrens, C. (1983), "Measuring the Productivity of Computer Systems Development Activities with Function Points," IEEE
Transactions on Software Engineering, November 1983.
Boehm, B. (1981), Software Engineering Economics, Prentice Hall.
Boehm, B. (1983), "The Hardware/Software Cost Ratio: Is It a Myth?" Computer 16(3), March 1983, pp. 78-80.
Boehm, B. (1985), "COCOMO: Answering the Most Frequent Questions," In Proceedings, First COCOMO Users Group
Meeting, Wang Institute, Tyngsboro, MA, May 1985.
Boehm, B. (1989), Software Risk Management, IEEE Computer Society Press, Los Alamitos, CA.
Boehm, B., T. Gray, and T. Seewaldt (1984), "Prototyping vs. Specifying: A Multi-Project Experiment," IEEE Transactions
on Software Engineering, May 1984, pp. 133-145.
Boehm, B., and W. Royce (1989), "Ada COCOMO and the Ada Process Model," Proceedings, Fifth COCOMO Users
Group Meeting, Software Engineering Institute, Pittsburgh, PA, November 1989.
Chidamber, S. and C. Kemerer (1994), "A Metrics Suite for Object Oriented Design," IEEE Transactions on Software
Engineering, (to appear 1994).
Computer Science and Telecommunications Board (CSTB) National Research Council (1993), Computing Professionals:
Changing Needs for the 1990s, National Academy Press, Washington DC, 1993.
Devenny, T. (1976). "An Exploratory Study of Software Cost Estimating at the Electronic Systems Division," Thesis No.
GSM/SM/765-4, Air Force Institute of Technology, Dayton, OH.
Gerlich, R., and U. Denskat (1994), "A Cost Estimation Model for Maintenance and High Reuse," Proceedings, ESCOM
1994, Ivrea, Italy.
Goethert, W., E. Bailey, M. Busby (1992), "Software Effort and Schedule Measurement: A Framework for Counting Staff
Hours and Reporting Schedule Information." CMU/SEI-92-TR-21, Software Engineering Institute, Pittsburgh, PA.
Goudy, R. (1987), "COCOMO-Based Personnel Requirements Model," Proceedings, Third COCOMO Users Group
Meeting, Software Engineering Institute, Pittsburgh, PA, November 1987.
IFPUG (1994), IFPUG Function Point Counting Practices: Manual Release 4.0, International Function Point Users Group,
Westerville, OH.
Jones, C. (1991), Applied Software Measurement, Assuring Productivity and Quality, McGraw-Hill, New York, N.Y.
Kauffman, R., and R. Kumar (1993), "Modeling Estimation Expertise in Object Based ICASE Environments," Stern School
of Business Report, New York University, January 1993.
Kemerer, C. (1987), "An Empirical Validation of Software Cost Estimation Models," Communications of the ACM, May
1987, pp. 416-429.
Kominski, R. (1991), Computer Use in the United States: 1989, Current Population Reports, Series P-23, No. 171, U.S.
Bureau of the Census, Washington, D.C., February 1991.
Kunkler, J. (1983), "A Cooperative Industry Study on Software Development/Maintenance Productivity," Xerox Corporation,
Xerox Square --- XRX2 52A, Rochester, NY 14644, Third Report, March 1985.
Miyazaki, Y., and K. Mori (1985), "COCOMO Evaluation and Tailoring," Proceedings, ICSE 8, IEEE-ACM-BCS, London,
August 1985, pp. 292-299.
Parikh, G., and N. Zvegintzov (1983). "The World of Software Maintenance," Tutorial on Software Maintenance, IEEE
Computer Society Press, pp. 1-3.
Park R. (1992), "Software Size Measurement: A Framework for Counting Source Statements." CMU/SEI-92-TR-20, Software
Engineering Institute, Pittsburgh, PA.
Park R, W. Goethert, J. Webb (1994), "Software Cost and Schedule Estimating: A Process Improvement Initiative",
CMU/SEI-94-SR-03, Software Engineering Institute, Pittsburgh, PA.
Paulk, M., B. Curtis, M. Chrissis, and C. Weber (1993), "Capability Maturity Model for Software, Version 1.1", CMU/SEI-93-TR-24, Software Engineering Institute, Pittsburgh PA 15213, Feb. 1993.
Paulk, M., C. Weber, S. Garcia, M. Chrissis, and M. Bush (1993a), "Capability Maturity Model for Software, Version 1.1", CMU/SEI-93-TR-25, Software Engineering Institute, Pittsburgh PA 15213, Feb. 1993.
Pfleeger, S. (1991), "Model of Software Effort and Productivity," Information and Software Technology 33 (3), April 1991,
pp. 224-231.
Royce, W. (1990), "TRW's Ada Process Model for Incremental Development of Large Software Systems," Proceedings,
ICSE 12, Nice, France, March 1990.
Ruhl, M., and M. Gunn (1991), "Software Reengineering: A Case Study and Lessons Learned," NIST Special Publication
500-193, Washington, DC, September 1991.
Selby, R. (1988), "Empirically Analyzing Software Reuse in a Production Environment," In Software Reuse: Emerging
Technology, W. Tracz (Ed.), IEEE Computer Society Press, 1988., pp. 176-189.
Selby, R., A. Porter, D. Schmidt and J. Berney (1991), "Metric-Driven Analysis and Feedback Systems for Enabling
Empirically Guided Software Development," Proceedings of the Thirteenth International Conference on Software
Engineering (ICSE 13), Austin, TX, May 13-16, 1991, pp. 288-298.
Silvestri, G. and J. Lukaseiwicz (1991), "Occupational Employment Projections," Monthly Labor Review 114(11), November
1991, pp. 64-94.
Acronyms and Definitions

AA - Assessment and Assimilation
AAF - Adaptation Adjustment Factor
AAM - Adaptation Adjustment Multiplier
ACAP - Analyst Capability
ACT - Annual Change Traffic
AEXP - Applications Experience
ASLOC - Adapted Source Lines of Code
AT - Automated Translation
BRAK - Breakage. The amount of controlled change allowed in a software development before requirements are "frozen."
CASE - Computer Aided Software Engineering
CM - Percent of Code Modified
CMM - Capability Maturity Model
COCOMO - Constructive Cost Model
Cost Drivers - A particular characteristic of the software development that has the effect of increasing or decreasing the amount of development effort, e.g. required product reliability, execution time constraints, project team application experience.
COTS - Commercial Off-The-Shelf
CPLX - Product Complexity
CSTB - Computer Science and Telecommunications Board
DATA - Database Size
DBMS - Database Management System
DI - Degree of Influence
DM - Percent of Design Modified
DOCU - Documentation match to life-cycle needs
EDS - Electronic Data Systems
ESLOC - Equivalent Source Lines of Code
FCIL - Facilities
FP - Function Points
GFS - Government Furnished Software
GUI - Graphical User Interface
ICASE - Integrated Computer Aided Software Environment
IM - Percent of Integration Required for Modified Software
KASLOC - Thousands of Adapted Source Lines of Code
KESLOC - Thousands of Equivalent Source Lines of Code
KSLOC - Thousands of Source Lines of Code
LEXP - Programming Language Experience
LTEX - Language and Tool Experience
MODP - Modern Programming Practices
NIST - National Institute of Standards and Technology
NOP - New Object Points
OS - Operating Systems
PCAP - Programmer Capability
PCON - Personnel Continuity
PDIF - Platform Difficulty
PERS - Personnel Capability
PEXP - Platform Experience
PL - Product Line
PM - Person Months. A person month is the amount of time one person spends working on the software development project for one month.
PREX - Personnel Experience
PROD - Productivity Rate
PVOL - Platform Volatility
RCPX - Product Reliability and Complexity
RELY - Required Software Reliability
RUSE - Required Reusability
RVOL - Requirements Volatility
SCED - Required Development Schedule
SECU - Classified Security Application
SEI - Software Engineering Institute
SITE - Multi-site Operation
SLOC - Source Lines of Code
STOR - Main Storage Constraint
SU - Software Understanding
T&E - Test and Evaluation
TIME - Execution Time Constraint
TOOL - Use of Software Tools
TURN - Computer Turnaround Time
UNFM - Programmer Unfamiliarity
USAF/ESD - U.S. Air Force Electronic Systems Division
VEXP - Virtual Machine Experience
VIRT - Virtual Machine Volatility
VMVH - Virtual Machine Volatility: Host
VMVT - Virtual Machine Volatility: Target
9. Application Composition

New Object Points are determined by:

NOP = \frac{(ObjectPoints) \times (100 - \%reuse)}{100} \quad (EQ 17)

A productivity rate, PROD, is estimated from a subjective average of developers' experience and capability and the ICASE maturity and capability:

Developers' experience and capability; ICASE maturity and capability | Very Low | Low | Nominal | High | Very High
PROD | 4 | 7 | 13 | 25 | 50

The estimated effort is then:

PM = \frac{NOP}{PROD} \quad (EQ 18)
10. Early Design

Estimate effort with:

PM = A \times (Size)^{B} \times \prod_{i=1}^{7} EM_i + PM_M \quad (EQ 19)

where

PM_M = \frac{ASLOC \times (AT / 100)}{ATPROD}

B = 1.01 + 0.01 \times \sum_{j=1}^{5} SF_j

Size = Size \times \left(1 + \frac{BRAK}{100}\right)

Size = KNSLOC + KASLOC \times \frac{100 - AT}{100} \times AAM

AAM = \frac{AA + AAF \times (1 + 0.02 \times SU \times UNFM)}{100}, \quad AAF \le 0.05

or

AAM = \frac{AA + AAF + SU \times UNFM}{100}, \quad AAF > 0.05
47
Symbol
Description
AA
ADAPT
AT
ATPROD
BRAK
CM
DM
EM
IM
KASLOC
KNSLOC
PM
SF
SU
UNFM
48
11. Post-Architecture
Estimate effort with:
PM = A \times (Size)^{B} \times \prod_{i=1}^{17} EM_i + PM_M \quad (EQ 20)

where

PM_M = \frac{ASLOC \times (AT / 100)}{ATPROD}

B = 1.01 + 0.01 \times \sum_{j=1}^{5} SF_j

Size = Size \times \left(1 + \frac{BRAK}{100}\right)

Size = KNSLOC + KASLOC \times \frac{100 - AT}{100} \times AAM

AAM = \frac{AA + AAF \times (1 + 0.02 \times SU \times UNFM)}{100}, \quad AAF \le 0.05

or

AAM = \frac{AA + AAF + SU \times UNFM}{100}, \quad AAF > 0.05
Symbol | Description
AA | Assessment and Assimilation increment
ADAPT | Percentage of components adapted
AT | Percentage of components that are automatically translated
ATPROD | Automated translation productivity
BRAK | Breakage: percentage of code thrown away due to requirements volatility
CM | Percentage of code modified
DM | Percentage of design modified
EM | Effort Multipliers: RELY, DATA, CPLX, RUSE, DOCU, TIME, STOR, PVOL, ACAP, PCAP, PCON, AEXP, PEXP, LTEX, TOOL, SITE, SCED
IM | Percentage of integration and test effort required for the adapted software
KASLOC | Size of the adapted component, in thousands of adapted source lines of code
KNSLOC | Size of the new component, in thousands of new source lines of code
PM | Person Months of estimated effort
SF | Scale Factors: PREC, FLEX, RESL, TEAM, PMAT
SU | Software Understanding increment (zero if DM = 0 and CM = 0)
UNFM | Programmer Unfamiliarity with the software
12. Schedule Estimation

TDEV = \left[3.0 \times (PM)^{(0.33 + 0.2 \times (B - 1.01))}\right] \times \frac{SCED\%}{100} \quad (EQ 21)

where

B = 1.01 + 0.01 \times \sum_{j=1}^{5} SF_j
Symbol | Description
SCED% | The compression / expansion percentage in the SCED effort multiplier
PM | Person Months of estimated effort, excluding the SCED effort multiplier
SF | Scale Factors: PREC, FLEX, RESL, TEAM, PMAT
TDEV | Time to develop, in calendar months
Statement Type (Includes / Excludes). When a line or statement contains more than one type, classify it as the first type in the ascending list that applies:
1. Executable
2. Non-executable:
3. Declarations
4. Compiler directives
5. Comments
10. Blank lines

How Produced (Includes / Excludes), continued:
5. Modified
6. Removed

15. Origin (Includes / Excludes):
1. New work: no prior existence
6. Another product
Origin categories follow Park R. (1992), "Software Size Measurement: A Framework for Counting Source Statements," CMU/SEI-92-TR-20, Software Engineering Institute, Pittsburgh, PA.

16. Usage (Includes / Excludes)

17. Delivery (Includes / Excludes):
1. Delivered
2. Delivered as source
4. Not delivered

18. Functionality (Includes / Excludes):
1. Operative
2. Inoperative (dead, bypassed, unused, unreferenced, or unaccessible)

19. Replications (Includes / Excludes)

20. Development Status (Includes / Excludes). Each statement has one and only one status, usually that of its parent unit:
1. Estimated or planned
2. Designed
3. Coded

21. Language (Includes / Excludes)

23. Clarifications by language:
23.1 Ada: 6. Pragmas
23.2 Assembly: 1. Macro calls; 2. Macro expansions
23.4 CMS-2: 1. Keywords like SYS-PROC and SYS-DD
23.5 COBOL: 1. "PROCEDURE DIVISION", "END DECLARATIVES", etc.
23.6 FORTRAN: 1. END statements; 2. Format statements; 3. Entry statements
23.7 JOVIAL
23.8 PASCAL: 1. Executable statements not terminated by semicolons; 3. FORWARD declarations
24.2 Declarations

Declarations are nonexecutable program elements that affect an assembler's or compiler's interpretation of other program elements. They are used to name, define, and initialize; to specify internal and external interfaces; to assign ranges for bounds checking; and to identify and bound modules and sections of code. Examples include declarations of names, numbers, constants, objects, types, subtypes, programs, subprograms, tasks, exceptions, packages, generics, macros, and deferred constants. Declarations also include renaming declarations, use clauses, and declarations that instantiate generics. Mandatory begin...end and {...} symbols that delimit bodies of programs and subprograms are integral parts of program and subprogram declarations. Language superstructure elements that establish boundaries for different sections of source code are also declarations. Examples include terms such as PROCEDURE DIVISION, DATA DIVISION, DECLARATIVES, END DECLARATIVES, INTERFACE, IMPLEMENTATION, SYS-PROC and SYS-DD. Declarations, in general, are never required by language specifications to initiate runtime actions, although some languages permit compilers to implement them that way.
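The counting rules above lend themselves to automation. The following toy filter, ours and deliberately simplified to a C-like language, approximates the Includes/Excludes choices of the statement-type list: comments and blank lines are dropped, and everything else is counted.

    # Toy line counter approximating the statement-type choices above for a
    # C-like language: executable statements, declarations, and compiler
    # directives count; comments and blank lines do not. A real counter would
    # parse logical statements and handle comment markers inside strings.

    def count_sloc(source_text):
        count = 0
        in_block_comment = False
        for line in source_text.splitlines():
            stripped = line.strip()
            if in_block_comment:
                if "*/" in stripped:
                    in_block_comment = False
                continue
            if not stripped:                  # blank line: excluded
                continue
            if stripped.startswith("//"):     # comment on its own line: excluded
                continue
            if stripped.startswith("/*"):     # banner or block comment: excluded
                if "*/" not in stripped:
                    in_block_comment = True
                continue
            count += 1                        # statement, declaration, or directive
        return count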
Requirements Management

Definitions
allocated requirements (system requirements allocated to software) - The subset of the system
requirements that are to be implemented in the software components of the system. The allocated
requirements are a primary input to the software development plan. Software requirements analysis
elaborates and refines the allocated requirements and results in software requirements that are documented.
software plans - The collection of plans, both formal and informal, used to express how software
development and/or maintenance activities will be performed. Examples of plans that could be included:
software development plan, software quality assurance plan, software configuration management plan,
software test plan, risk management plan, and process improvement plan.
software work product - Any artifact created as part of defining, maintaining, or using a software process,
including process descriptions, plans, procedures, computer programs, and associated documentation, which
may or may not be intended for delivery to a customer or end user.
Goals
1. System requirements allocated to software are controlled to establish a baseline for software engineering and management use.
2. Software plans, products, and activities are kept consistent with the system requirements allocated to software.
Software Project Planning

Definitions
commitment - A pact that is freely assumed, visible, and expected to be kept by all parties.
software plans - The collection of plans, both formal and informal, used to express how software
development and/or maintenance activities will be performed. Examples of plans that could be included:
software development plan, software quality assurance plan, software configuration management plan,
software test plan, risk management plan, and process improvement plan.
Goals
1. Software estimates are documented for use in planning and tracking the software project.
2. Software project activities and commitments are planned and documented.
3. Affected groups and individuals agree to their commitments related to the software project.
Software Project Tracking and Oversight

Definitions
commitment - A pact that is freely assumed, visible, and expected to be kept by all parties.
software plans - The collection of plans, both formal and informal, used to express how software
development and/or maintenance activities will be performed. Examples of plans that could be included:
software development plan, software quality assurance plan, software configuration management plan,
software test plan, risk management plan, and process improvement plan.
software work product - Any artifact created as part of defining, maintaining, or using a software process,
including process descriptions, plans, procedures, computer programs, and associated documentation, which
may or may not be intended for delivery to a customer or end user.
Goals
1. Actual results and performances are tracked against the software plans.
2. Corrective actions are taken and managed to closure when actual results and performance deviate significantly from the software plans.
3. Changes to software commitments are agreed to by the affected groups and individuals.
Software Subcontract Management

Definitions
documented procedure - A written description of a course of action to be taken to perform a given task.
[IEEE-STD-610 Glossary]
event-driven review/activity - A review or activity that is performed based on the occurrence of an event
within the project (e.g., a formal review or the completion of a life-cycle stage).
periodic review/activity - A review/activity that occurs at a specified regular time interval, rather than at
the completion of major events.
Goals
1. The prime contractor selects qualified software subcontractors.
2. The prime contractor and the software subcontractor agree to their commitments to each other.
3. The prime contractor and the software subcontractor maintain ongoing communications.
4. The prime contractor tracks the software subcontractor's actual results and performance against its commitments.
Software Quality Assurance

Definitions
audit - An independent examination of a work product or set of work products to assess compliance with
specifications, standards, contractual agreements, or other criteria. [IEEE-STD-610 Glossary]
periodic review/activity - A review/activity that occurs at a specified regular time interval, rather than at
the completion of major events.
policy - A guiding principle, typically established by senior management, which is adopted by an
organization or project to influence and determine decisions.
procedure - A written description of a course of action to be taken to perform a given task. [IEEE-STD-610
Glossary]
software quality assurance (SQA) - (1) A planned and systematic pattern of all actions necessary to
provide adequate confidence that a software work product conforms to established technical requirements.
(2) A set of activities designed to evaluate the process by which software work products are developed
and/or maintained.
standard - Mandatory requirements employed and enforced to prescribe a disciplined, uniform approach to
software development.
Goals
1. Software quality assurance activities are planned.
2. Adherence of software products and activities to the applicable standards, procedures, and requirements is verified objectively.
3. Affected groups and individuals are informed of software quality assurance activities and results.
4. Noncompliance issues that cannot be resolved within the software project are addressed by senior management.
Software Configuration Management

Definitions
configuration item - An aggregation of hardware, software, or both, that is designated for configuration
management and treated as a single entity in the configuration management process. [IEEE-STD-610
Glossary]
documented procedure - A written description of a course of action to be taken to perform a given task.
[IEEE-STD-610 Glossary]
software baseline - A set of configuration items (software documents and software components) that has
been formally reviewed and agreed upon, that thereafter serves as the basis for future development, and that
can be changed only through formal change control procedures.
software work product - Any artifact created as part of defining, maintaining, or using a software process,
including process descriptions, plans, procedures, computer programs, and associated documentation, which
may or may not be intended for delivery to a customer or end user.
Goals
1. Software configuration management activities are planned.
2. Selected software work products are identified, controlled, and available.
3. Changes to identified software work products are controlled.
4. Affected groups and individuals are informed of the status and content of software baselines.
Organization Process Focus

Definitions
periodic review/activity - A review/activity that occurs at a specified regular time interval, rather than at
the completion of major events.
software process - A set of activities, methods, practices, and transformations that people use to develop
and maintain software and the associated products (e.g., project plans, design documents, code, test cases,
and user manuals).
software process assessment - An appraisal by a trained team of software professionals to determine the
state of an organization's current software process, to determine the high-priority software process-related
issues facing an organization, and to obtain the organizational support for software process improvement.
software engineering process group (SEPG) - A group of specialists who facilitate the definition,
maintenance and improvement of the software process used by the organization. In the key practices, this
group is generically referred to as "the group responsible for the organization's software process activities."
Goals
1. Software process development and improvement activities are coordinated across the organization.
2. The strengths and weaknesses of the software processes used are identified relative to a process standard.
3. Organization-level process development and improvement activities are planned.
Organization Process Definition

Definitions
audit - An independent examination of a work product or set of work products to assess compliance with
specifications, standards, contractual agreements, or other criteria.
organization's standard software process - The operational definition of the basic process that guides the
establishment of a common software process across the software projects in an organization. It describes the
fundamental software process elements that each software project is expected to incorporate into its defined
software process. It also describes the relationships (e.g., ordering and interfaces) between these software
process elements.
software quality assurance (SQA) - (1) A planned and systematic pattern of all actions necessary to
provide adequate confidence that a software work product conforms to established technical requirements.
(2) A set of activities designed to evaluate the process by which software work products are developed
and/or maintained.
Goals
1. A standard software process for the organization is developed and maintained.
2. Information related to the use of the organization's standard software process by the software projects is collected, reviewed, and made available.
Training Program

Definitions
periodic review/activity - A review/activity that occurs at a specified regular time interval, rather than at
the completion of major events.
software engineering group (SEG) - The collection of individuals (both managers and technical staff) who
have responsibility for software development and maintenance activities (i.e., requirements analysis, design,
code, and test) for a project. Groups performing software related work, such as the software quality
assurance group, the software configuration management group, and the software engineering process
group, are not included in the software engineering group.
Goals
1. Training activities are planned.
2. Training for developing the skills and knowledge needed to perform software management and technical roles is provided.
3. Individuals in the software engineering group and software-related groups receive the training necessary to perform their roles.
Integrated Software Management

Definitions
organization's standard software process - The operational definition of the basic process that guides the
establishment of a common software process across the software projects in an organization. It describes the
fundamental software process elements that each software project is expected to incorporate into its defined
software process. It also describes the relationships (e.g., ordering and interfaces) between these software
process elements.
policy - A guiding principle, typically established by senior management, which is adopted by an
organization or project to influence and determine decisions.
project's defined software process - The operational definition of the software process used by a project.
The project's defined software process is a well-characterized and understood software process, described in
terms of software standards, procedures, tools, and methods. It is developed by tailoring the organization's
standard software process to fit the specific characteristics of the project.
tailoring - To modify a process, standard, or procedure to better match process or product requirements.
Goals
1. The project's defined software process is a tailored version of the organization's standard software process.
2. The project is planned and managed according to the project's defined software process.
Software Product Engineering

Definitions
project's defined software process - The operational definition of the software process used by a project.
The project's defined software process is a well-characterized and understood software process, described in
terms of software standards, procedures, tools, and methods. It is developed by tailoring the organization's
standard software process to fit the specific characteristics of the project.
organization's standard software process - The operational definition of the basic process that guides the
establishment of a common software process across the software projects in an organization. It describes the
fundamental software process elements that each software project is expected to incorporate into its defined
software process. It also describes the relationships (e.g., ordering and interfaces) between these software
process elements.
software work product - Any artifact created as part of defining, maintaining, or using a software process,
including process descriptions, plans, procedures, computer programs, and associated documentation, which
may or may not be intended for delivery to a customer or end user.
Goals
1. The software engineering tasks are defined, integrated, and consistently performed to produce the software.
2. Software work products are kept consistent with each other.
Intergroup Coordination

Definitions
software engineering group (SEG) - The collection of individuals (both managers and technical staff) who
have responsibility for software development and maintenance activities (i.e., requirements analysis, design,
code, and test) for a project. Groups performing software related work, such as the software quality
assurance group, the software configuration management group, and the software engineering process
group, are not included in the software engineering group.
commitment - A pact that is freely assumed, visible, and expected to be kept by all parties.
Goals
1. The customer's requirements are agreed to by all affected groups.
2. The commitments between the engineering groups are agreed to by the affected groups.
3. The engineering groups identify, track, and resolve intergroup issues.
Peer Reviews

Definitions
peer review - A review of a software work product, following defined procedures, by peers of the
producers of the product for the purpose of identifying defects and improvements.
software work product - Any artifact created as part of defining, maintaining, or using a software process,
including process descriptions, plans, procedures, computer programs, and associated documentation, which
may or may not be intended for delivery to a customer or end user.
Goals
1. Peer review activities are planned.
2. Defects in the software work products are identified and removed.
Quantitative Process Management

Definitions
organization's standard software process - The operational definition of the basic process that guides the
establishment of a common software process across the software projects in an organization. It describes the
fundamental software process elements that each software project is expected to incorporate into its defined
software process. It also describes the relationships (e.g., ordering and interfaces) between these software
process elements.
process capability - The range of expected results that can be achieved by following a process.
process performance - A measure of the actual results achieved by following a process.
project's defined software process - The operational definition of the software process used by a project.
The project's defined software process is a well-characterized and understood software process, described in
terms of software standards, procedures, tools, and methods. It is developed by tailoring the organization's
standard software process to fit the specific characteristics of the project.
Goals
1. The quantitative process management activities are planned.
2. The process performance of the project's defined software process is controlled quantitatively.
3. The process capability of the organization's standard software process is known in quantitative terms.
Software Quality Management

Definitions
software quality goal - Quantitative quality objectives defined for software work products.
Goals
1. The project's software quality management activities are planned.
2. Measurable goals for software product quality and their priorities are defined.
3. Actual progress toward achieving the quality goals for the software products is quantified and managed.
Defect Prevention

Definitions
causal analysis meeting - A meeting, conducted after completing a specific task, to analyze defects
uncovered during the performance of that task.
common cause (of a defect) - A cause of a defect that is inherently part of a process or system. Common
causes affect every outcome of the process and everyone working in the process.
Goals
1. Defect prevention activities are planned.
2. Common causes of defects are sought out and identified.
3. Common causes of defects are prioritized and systematically eliminated.
Technology Change Management

Definitions
documented procedure - A written description of a course of action to be taken to perform a given task.
[IEEE-STD-610 Glossary]
organization's standard software process - The operational definition of the basic process that guides the
establishment of a common software process across the software projects in an organization. It describes the
fundamental software process elements that each software project is expected to incorporate into its defined
software process. It also describes the relationships (e.g., ordering and interfaces) between these software
process elements.
Goals
1. Incorporation of technology changes is planned.
2. New technologies are evaluated to determine their effect on quality and productivity.
3. Appropriate new technologies are transferred into normal practice across the organization.
Process Change Management

Definitions
documented procedure - A written description of a course of action to be taken to perform a given
task. [IEEE-STD-610 Glossary]
organization's standard software process - The operational definition of the basic process that guides the
establishment of a common software process across the software projects in an organization. It describes the
fundamental software process elements that each software project is expected to incorporate into its defined
software process. It also describes the relationships (e.g., ordering and interfaces) between these software
process elements.
project's defined software process - The operational definition of the software process used by a project.
The project's defined software process is a well-characterized and understood software process, described in
terms of software standards, procedures, tools, and methods. It is developed by tailoring the organization's
standard software process to fit the specific characteristics of the project.
Goals
1. Continuous process improvement is planned.
2. Participation in the organization's software process improvement activities is organization wide.
3. The organization's standard software process and the projects' defined software processes are improved continuously.
W(i)                             Very Low   Low    Nominal   High   Very High   Extra High
Precedentedness                    4.05     3.24    2.43     1.62     0.81         0.00
Development Flexibility            6.07     4.86    3.64     2.43     1.21         0.00
Architecture / Risk Resolution     4.22     3.38    2.53     1.69     0.84         0.00
Team Cohesion                      4.94     3.95    2.97     1.98     0.99         0.00
Process Maturity                   4.54     3.64    2.73     1.82     0.91         0.00
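The table translates directly into the exponent B. The small sketch below is ours; the weights are transcribed from the table above, and the dictionary packaging and function name are assumptions for illustration.

    # Scale factor weights W(i), transcribed from the table above.
    W = {
        "Precedentedness":                {"Very Low": 4.05, "Low": 3.24, "Nominal": 2.43,
                                           "High": 1.62, "Very High": 0.81, "Extra High": 0.00},
        "Development Flexibility":        {"Very Low": 6.07, "Low": 4.86, "Nominal": 3.64,
                                           "High": 2.43, "Very High": 1.21, "Extra High": 0.00},
        "Architecture / Risk Resolution": {"Very Low": 4.22, "Low": 3.38, "Nominal": 2.53,
                                           "High": 1.69, "Very High": 0.84, "Extra High": 0.00},
        "Team Cohesion":                  {"Very Low": 4.94, "Low": 3.95, "Nominal": 2.97,
                                           "High": 1.98, "Very High": 0.99, "Extra High": 0.00},
        "Process Maturity":               {"Very Low": 4.54, "Low": 3.64, "Nominal": 2.73,
                                           "High": 1.82, "Very High": 0.91, "Extra High": 0.00},
    }

    def exponent_b(ratings):
        # ratings maps each scale driver to its rating level
        return 1.01 + 0.01 * sum(W[driver][level] for driver, level in ratings.items())

    print(round(exponent_b({"Precedentedness": "High",
                            "Development Flexibility": "Nominal",
                            "Architecture / Risk Resolution": "Nominal",
                            "Team Cohesion": "High",
                            "Process Maturity": "Low"}), 4))  # -> 1.1441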
Cost Driver   Very Low   Low    Nominal   High   Very High   Extra High
RELY            0.75     0.88    1.00     1.15     1.39
DATA                     0.93    1.00     1.09     1.19
CPLX            0.75     0.88    1.00     1.15     1.30         1.66
RUSE                     0.91    1.00     1.14     1.29         1.49
DOCU            0.89     0.95    1.00     1.06     1.13
TIME                             1.00     1.11     1.31         1.67
STOR                             1.00     1.06     1.21         1.57
PVOL                     0.87    1.00     1.15     1.30
ACAP            1.50     1.22    1.00     0.83     0.67
PCAP            1.37     1.16    1.00     0.87     0.74
PCON            1.24     1.10    1.00     0.92     0.84
AEXP            1.22     1.10    1.00     0.89     0.81
PEXP            1.25     1.12    1.00     0.88     0.81
LTEX            1.22     1.10    1.00     0.91     0.84
TOOL            1.24     1.12    1.00     0.86     0.72
SITE            1.25     1.10    1.00     0.92     0.84         0.78
SCED            1.29     1.10    1.00     1.00     1.00
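Looking up effort multipliers works the same way. This sketch, ours rather than part of the model definition, transcribes only a few rows of the table for brevity; drivers left at Nominal contribute 1.00 and can simply be omitted.

    # Selected Post-Architecture effort multipliers from the table above.
    EM = {
        "RELY": {"Very Low": 0.75, "Low": 0.88, "Nominal": 1.00,
                 "High": 1.15, "Very High": 1.39},
        "CPLX": {"Very Low": 0.75, "Low": 0.88, "Nominal": 1.00,
                 "High": 1.15, "Very High": 1.30, "Extra High": 1.66},
        "ACAP": {"Very Low": 1.50, "Low": 1.22, "Nominal": 1.00,
                 "High": 0.83, "Very High": 0.67},
    }

    def em_product(ratings):
        # Multiply the selected effort multipliers together.
        product = 1.0
        for driver, level in ratings.items():
            product *= EM[driver][level]
        return product

    # Example: high reliability, very high complexity, high analyst capability.
    print(round(em_product({"RELY": "High", "CPLX": "Very High",
                            "ACAP": "High"}), 2))  # 1.15 * 1.30 * 0.83, about 1.24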