ROI Analysis of The System Architecture Virtual Integration Initiative
April 2018
TECHNICAL REPORT
CMU/SEI-2018-TR-002
http://www.sei.cmu.edu
Copyright 2018 Carnegie Mellon University. All Rights Reserved.
This material is based upon work funded and supported by the Department of Defense under Contract
No. FA8702-15-D-0002 with Carnegie Mellon University for the operation of the Software Engineer-
ing Institute, a federally funded research and development center.
The view, opinions, and/or findings contained in this material are those of the author(s) and should not
be construed as an official Government position, policy, or decision, unless designated by other docu-
mentation.
This report was prepared for the SEI Administrative Agent AFLCMC/AZS 5 Eglin Street Hanscom
AFB, MA 01731-2100
[DISTRIBUTION STATEMENT A] This material has been approved for public release and unlimited
distribution. Please see Copyright notice for non-US Government use and distribution.
Internal use:* Permission to reproduce this material and to prepare derivative works from this material
for internal use is granted, provided the copyright and “No Warranty” statements are included with all
reproductions and derivative works.
External use:* This material may be reproduced in its entirety, without modification, and freely distrib-
uted in written or electronic form without requesting formal permission. Permission is required for any
other external and/or commercial use. Requests for permission should be directed to the Software En-
gineering Institute at permission@sei.cmu.edu.
Carnegie Mellon® is registered in the U.S. Patent and Trademark Office by Carnegie Mellon Univer-
sity.
DM18-0482
Acknowledgments
Executive Summary
Abstract
1 Introduction
6 ROI Estimates
8 Conclusion
References
Figure 3: Rework Cost-Avoidance as a Function of Reuse for Three Project Sizes with 30% and 50% Rework
Figure 4: Projected Arithmetic ROI as a Function of Reuse for Three Project Sizes with 30% and 50% Rework
Figure 5: Projected Logarithmic ROI as a Function of Reuse for Three Project Sizes with 30% and 50% Rework
Figure 6: Computed NPV as a Function of Reuse for Three Project Sizes with 30% and 50% Rework
Table 2: Summary of Switch Settings in COCOMO II for the SAVI Cost Model
Table 3: Size of Subsystems with Respect to Criticality and Size of Reused Code Base in MSLOC
Table 4: Estimated Software-Development Cost in Millions of US$, Given MSLOC and Amount of Reuse Using the "As-Is" Process
Table 10: Expected Removal Efficiency of Faults and Defects When Deploying SAVI
This work was conducted in 2008-2009 with funding from the Aerospace Vehicle Systems Insti-
tute (AVSI) and performed by the authors along with members of AVSI, including David Redman
(AVSI), Don Ward (AVSI), John Chilenski (Boeing), Keith Appleby (BAE Systems), Leon Cor-
ley (Lockheed Martin), Bruce Lewis (U.S. Army Aviation and Missile Research Development
and Engineering Center Software Engineering Directorate, Department of Defense), Jean-Jacques
Toumazet (Airbus), John Glenski (Rockwell Collins), Joe Shultz (GE Aviation), Bob Manners
(FAA), and Manni Papadopoulos (FAA). The authors would also like to thank the members of the
AVSI group for their comments and feedback.
The ROI study report was originally published as an AVSI System Architecture Virtual Integra-
tion (SAVI) report. To reach a wider audience, AVSI and the Software Engineering Institute
(SEI) agreed to republish the study as an SEI technical report.
At the time of the study, Jörgen Hansson was a member of the technical staff at the SEI. Steven
Helton retired from Boeing in 2015.
The size of aerospace software, as measured in source lines of code (SLOC), has grown rapidly.
Airbus and Boeing data show that software SLOC have doubled every four years. The current
generation of aircraft software exceeds 25 million SLOC (MSLOC). These systems must satisfy
safety-critical, embedded, real-time, and security requirements. Consequently, they cost signifi-
cantly more than general-purpose systems. Their design is more complex due to quality attribute
requirements, high connectivity among subsystems, and sensor dependencies, each of which affects all system-development phases but especially design, integration, and verification and validation.
Several analyses of software-development projects show that detecting and removing defects are
the most expensive and time-consuming parts of the work. Finding and fixing defects alone often
causes projects to overrun budget and schedule because developers must perform significant
amounts of rework. The root of the problem is that most defects are introduced in the pre-coding phases, specifically during requirements and design, but only a fraction are detected and addressed in the same phase. More than half are not found until hardware/software integration occurs. For aerospace and safety-critical systems, the cost of removing a defect introduced in pre-
coding phases but detected in post-coding phases is two orders of magnitude greater than the cost
of removing it before code development.
The System Architecture Virtual Integration (SAVI) initiative is a multiyear, multimillion dollar
program for developing the capability to virtually integrate systems. The capability promises to
allow developers to recognize system-level problems early and reduce leakage of errors into the
post-coding phases. The program is sponsored by the Aerospace Vehicle Systems Institute
(AVSI), a research center of the Texas Engineering Experiment Station, which is a member of the
Texas A&M University System. Members of AVSI include Airbus, BAE Systems, Boeing, the
Department of Defense, Embraer, the Federal Aviation Administration, General Electric,
Goodrich Aerospace, Hamilton Sundstrand, Honeywell International, Lockheed Martin, NASA,
and Rockwell Collins.
This report presents an analysis of the economic effects of the SAVI approach on the development
of software-reliant systems for aircraft compared to existing development paradigms. It describes
the results of a return-on-investment (ROI) analysis to determine the net present value (NPV) of
the investment in the SAVI approach. The investment into the approach over the multi-year SAVI
initiative by the different member companies was estimated to be $86M. Their investment covers
the maturation, adaptation, and piloting of SAVI practices and technologies, and the transition of
the approach into member companies. The analysis uses conservative estimates of costs and bene-
fits to establish a lower bound on the ROI; less conservative figures yield higher economic gains.
The approach taken in this study was to determine the rework cost-avoidance based on SAVI
practice by applying an efficiency rate for removing defects to the rework cost of a system com-
pared to current practice. The approach included the following conservative assumptions:
• We adopted COnstructive COst MOdel (COCOMO) II, the leading tool for estimating soft-
ware development costs using the current development process. Using typical development
processes, we derived the total cost for developing three software systems of different sizes
as follows:
− Each system consists of three types of subsystems (safety critical, highly critical, and less critical) with code bases of 30%, 30%, and 40%, respectively, of the total code base, a typical mix in the aircraft industry. This let us differentiate the cost of subsystems with respect to their requirements.
− Each subsystem is developed with both new code and the reuse of existing code. We
considered three cases of new code development and varied the proportions of code re-
use from 30% to 70%.
− We used three system sizes: two based on the current generation of aircraft software sys-
tems (27 and 30 MSLOC) and one reflecting a future system of 60 MSLOC. The syn-
thetic system clearly illustrates the economic impact of system growth, although build-
ing a system of this size is unaffordable.
− The nominal labor rate is $28,200 per month for 2014, based on a 2006 rate of $22,800 per month ($150/hr. × 152 hr./mo.) adjusted for annual inflation of 2.694%: $22,800 × 1.02694^(2014 − 2006) ≈ $28,200.
• On the basis of SAVI members’ experiences, we estimated the total system-development
cost from an estimate of the software-development cost using a multiplier of 1.55, which re-
flects software development making up about 66% of system-development cost.
• On the basis of documented and experiential evidence for aerospace systems, we used two
conservative estimates for total rework cost: 30% and 50% of the total system-development
cost.
• We determined ROI and NPV based on rework cost reduction attributed to earlier discovery
of defects and did not include reductions in maintenance cost and deployment delays. We
limited rework cost savings to discovery of requirements errors, which make up 35% of all
errors and 79% of the rework cost.
• We used experts’ estimates of the efficiency rate for removing defects of 66% as well as a
reduced rate of 33% for more conservative estimates.
• We assumed that SAVI practices of model creation and analysis would replace existing doc-
ument-based practices of system requirement and design specification at a similar cost.
• We used $86M as estimated investment by the SAVI member companies over multiple years
to mature SAVI and transition current practice to SAVI.
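To make the arithmetic behind these assumptions concrete, the following Python sketch chains them together for one scenario. The $9,176 million system-development cost is the 27 MSLOC nominal estimate quoted in the abstract; the chained multiplication is a simplified reading of the report's rework cost-avoidance formula (presented in Section 2), and the variable names are ours.

```python
# Illustrative sketch: chain the stated assumptions into a cost-avoidance and
# arithmetic ROI figure. The system cost is the 27 MSLOC nominal estimate from
# the abstract; all other inputs are the assumption values listed above.

system_cost_musd = 9176.0      # total system-development cost (millions of US$)
rework_fraction = 0.50         # rework as a fraction of total cost (0.30 in the conservative case)
req_share_of_rework = 0.79     # share of rework cost attributable to requirements errors
removal_efficiency = 0.66      # SAVI removal efficiency for requirements errors (0.33 skeptical case)
investment_musd = 86.0         # estimated multiyear SAVI investment

cost_avoided = (system_cost_musd * rework_fraction
                * req_share_of_rework * removal_efficiency)
roi_arithmetic = (cost_avoided - investment_musd) / investment_musd

print(f"Cost avoided: ${cost_avoided:,.0f}M "
      f"({cost_avoided / system_cost_musd:.1%} of total system-development cost)")
print(f"Arithmetic ROI on the $86M investment: {roi_arithmetic:.1f}")
```

With these inputs the sketch reproduces the roughly 26% savings cited in the abstract; the conservative settings (30% rework, 33% efficiency) scale the result down proportionally.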
The predicted returns were considered to be higher than anticipated, which led to several follow-
on activities. First, one of the SAVI system integrator members obtained an independent assess-
ment by its organization’s cost estimating group that agreed with the findings of this report. Sec-
ond, this initial ROI study was followed by a second SAVI ROI study. In the second study, a
Monte Carlo algorithm was used to drive the COCOMO II cost estimation, resulting in a reduced
variation of results. In addition, the commercial tool SEER was used to build a SEER-SEM and
SEER-H model of a Boeing 777-200 to explicitly estimate the cost of the non-software portion of
the system and compare it to both publicly available data and estimates of the original SAVI ROI
study presented here. The SEER analysis confirmed that the cost multiplier of 1.55 was accepta-
ble for 2010. Unfortunately, software content continues to grow while the physical parts count remains stable, so software rises from 66% of the total system-development cost in 2010 to a projected 88% by 2024.
The System Architecture Virtual Integration (SAVI) initiative is a multiyear, multimillion dollar
program that is developing the capability to virtually integrate systems before designs are imple-
mented and tested on hardware. The purpose of SAVI is to develop a means of countering the
costs of exponentially increasing complexity in modern aerospace software systems. The program
is sponsored by the Aerospace Vehicle Systems Institute, a research center of the Texas Engineer-
ing Experiment Station, which is a member of the Texas A&M University System. This report
presents an analysis of the economic effects of the SAVI approach on the development of soft-
ware-reliant systems for aircraft compared to existing development paradigms. The report de-
scribes the detailed inputs and results of a return-on-investment (ROI) analysis to determine the
net present value of the investment in the SAVI approach. The ROI is based on rework cost-
avoidance attributed to earlier discovery of requirements errors through analysis of virtually inte-
grated models of the embedded software system expressed in the SAE International Architecture
Analysis and Design Language (AADL) standard architecture modeling language. The ROI anal-
ysis uses conservative estimates of costs and benefits, especially for those parameters that have a
proven, strong correlation to overall system-development cost. The results of the analysis, in part,
show that the nominal cost reduction for a system that contains 27 million source lines of code
would be $2.391 billion (out of an estimated $9.176 billion), a 26.1% cost savings. The original study, reported here, was followed by a second study that validated and further refined the estimated cost savings.
Analysis of software-development projects shows that detecting, locating, and removing defects
are the most expensive and time-consuming parts of the work. Finding and fixing defects often
cause projects to run over budget and schedule as developers perform significant amounts of re-
work in later phases of product development [RTI 2002, Dabney 2003]. For information technol-
ogy (IT) applications, the defect-removal efficiency before delivery is generally about 80% to
85%; the cost of correcting these defects averages about 35% of the total system-development
cost [Jones 2007]. Correspondingly, the time needed to rework defects averages approximately
35% of the total project-development schedule.
Experts have observed that the rework fraction of total development work increases with the size
of the project and can be as high as 60% to 80% for very large projects [Basili 1994, 2001; Jones
1996; Cross 2002]. Aerospace software systems, in particular, have grown at a rapid pace. Airbus and Boeing data presented in this report show that the size of aircraft software, measured in millions of source lines of code (MSLOC), doubles every four years, and current-generation software exceeds 25 MSLOC. Safety-critical system design is intrinsically more complex than general-purpose system design because of the quality attribute requirements, high connectivity among subsystems, and sensor dependencies that affect all system-development phases but are most critical to design, integration, and verification and validation activities.
The main problem is clear: most defects are introduced in the early pre-coding phases of develop-
ment, such as requirements and design, but the majority of defects are detected and removed in
post-coding phases, such as integration and testing. The nominal cost of removing a defect intro-
duced in pre-coding phases and detected in post-coding phases is generally one order of magni-
tude higher than the cost of removing it prior to code development. For safety-critical systems, the
difference can be as much as two orders of magnitude higher. In this report, we present such data
from multiple sources.
Software systems are growing in size, not only in the number of subsystems but also the degree of
interactions between them. This condition likely will further raise the defect-removal cost, as each
defect affects a larger number of subsystems. This condition will require innovative solutions in
the following areas:
• Understanding the dynamics of defect introduction and removal (the phases in which defects are introduced, the phases in which they are detected, and the cost of removing defects relative to the phase lag between introduction and detection) is paramount to accurately estimating the rework cost in terms of total software cost.
• The dominance of rework cost resulting from requirements and architectural design defects
clearly suggests a strong need for improved techniques to prevent and detect such defects.
The System Architecture Virtual Integration (SAVI) initiative is a multiyear, multimillion dollar
program focused on developing the capability to virtually integrate systems before designs are
committed to hardware, as a means of managing the exponentially increasing complexity of mod-
ern aerospace systems. Its objective is to discover system-level errors—typically requirements
and design errors—through virtual integration that occurs earlier in the development lifecycle.
To evaluate the economic effects of the SAVI engineering practice, we compared the relative cost
advantage of two development paradigms, one following the development practices in place today
(“as is”) and another deploying SAVI technology (“to be”). If all else remains the same, the dif-
ference between the two approaches is the efficiency in managing complexity and correcting de-
fects. From our ROI (or rate-of-return) analysis, we computed the net present value (NPV, also
called net present worth).
ROI is a measure of the monetary value generated by an investment or of the monetary loss
caused by an investment. It measures the cash flow or income stream from the investment to the
investor and denotes the ratio of money gained or lost (realized or unrealized) on an investment
relative to the amount of money invested. In our case, SAVI members expected to invest $86M
over multiple years into the maturation and transition of SAVI technology into practice, and the
ROI is an indication of rate of return due to cost reduction.
NPV is commonly used for appraising long-term projects by accounting for the time value of money and considering cash flows over time. Outgoing cash flows include start-up costs, initial invest-
ments, and operational costs; incoming cash flow implies positive cash flow from the investment,
which, in the case of SAVI, is based largely on cost avoidance. Computing NPV indicates how
much value an investment or project adds to the organization. A positive NPV implies that the in-
vestment would add value to the organization; a negative NPV implies that the investment would
subtract value. In our case, NPV represents cost savings minus expenses in U.S. dollars for a pro-
ject running from 2010-2018 using SAVI technology.
In Section 2, we present the ROI analysis in terms of a rework cost-avoidance formula. The sec-
tions that follow that analysis elaborate on the contributing elements of the ROI formula:
• In Section 3, we discuss the exponential growth of avionics software systems in terms of
SLOC by analyzing the historical data to correlate major cost drivers to system size.
We based our analysis to determine ROI and NPV for a typical SAVI deployment on the invest-
ment (an estimated $86M) by SAVI member companies in the multi-year SAVI initiative to ma-
ture and transition the new technology and development paradigm to member company product
groups. We also considered the cost of implementing SAVI in a project to be the same as the cost
for current methods, once a team has been trained in the SAVI practice (a cost covered in the investment figure).1
The ROI analysis is based on avoiding rework cost by using SAVI to detect, earlier in the development process, defects that are currently detected after unit test at high rework cost.
The SAVI approach aims to reduce requirements and design defects through up-front modeling
and validation, preventing these defects from flowing down to later phases where they would
cause significant rework efforts and thus cost more to fix.
Our estimates of the possible savings from rework avoidance include several observations about
factors that increase costs:
• Rework cost is primarily driven by failures in integration [Lutz 1993]. 2
• More than 70% of all defects can be traced back to defects introduced in pre-code development phases (requirements and design), with a nominal rework cost two orders of magnitude greater than the cost of removing defects before coding [Dabney 2003].
• Rework constitutes a significant portion of the total system-development cost with require-
ments-related rework making up 79% of the total rework cost [Dabney 2003]. Thus, increas-
ing defect detection and removal efficiency lowers the rework cost.
________________________________________________________________________________
1
Implementation costs in a “to-be” SAVI project consist of creating, evolving, and analyzing practice models of
the system. Those activities replace current document-based methods for specifying system requirements, re-
quests for bids, and design documentation. In the context of this study, we assume the cost for applying SAVI to
be the same as the cost for current methods.
2 In this study, 387 software defects discovered during the integration and test phase of the Voyager and Galileo
spacecraft were analyzed (the same software was used to control both spacecraft). Lutz found that 98% of the
faults were attributed to “functional faults” (operating faults and conditional faults resulting from incorrect condi-
tion or limit values), behavioral faults (i.e., a system is not conforming to requirements), and “interface faults”
(related to interaction with other systems’ components). Only 2% of the faults were coding faults internal to a
module. The functional and interface faults were direct consequences either of errors in understanding and im-
plementing requirements or of inadequate communication among development teams. Safety-related errors
accounted for 48% of the errors discovered in Voyager and 56% discovered in Galileo; 36% of the errors in
Voyager and 19% of the errors in Galileo were related to interface faults. Inter-team communication errors (as
opposed to intra-team) were the leading cause of interface faults (93% for Voyager and 73% for Galileo). One
primary cause of safety-related interface faults was misunderstood hardware-interface specifications (67% for
Voyager and 48% for Galileo). Errors in recognizing and understanding the requirements were a significant
cause of functional faults (62% for Voyager and 79% for Galileo).
The rework cost percentage represents the total rework cost as a percentage of the total system-
development cost. The rework cost percentage for requirements errors represents the percentage
of rework cost attributable to requirements errors in terms of total rework cost. The removal-effi-
ciency percentage for requirements errors represents the percentage of requirements errors that are
discovered and removed during the requirements phase due to SAVI early detection instead of in
a later phase.
• We investigated the effects of changing the percentage of rework cost. Assuming all else re-
mains the same, we considered rework to be a conservative 30% of total cost and a nominal
50% of total cost for 2010, in agreement with SAVI members.
• We computed the rework cost for requirements errors as a percentage of the total rework
cost. We used an estimated number of defects that would typically be introduced and discov-
ered in different development phases for a system of a defined size and complexity. Empiri-
cal data derived from case studies [RTI 2002, Dabney 2003] and recent experiences of the
companies participating in the SAVI initiative corroborated our estimates.
• We determined the effectiveness of defect-removal efficiency in SAVI-based development
compared to current system-development practices, based on the ratios of defects being in-
troduced in the different phases of the system-development lifecycle and the likelihood of
detecting a defect in a certain phase. On the basis of Miller’s fault taxonomy [Miller 1995]
and fault distributions derived from current system-development practices (i.e., as is) [Hayes
2003], we applied conservative assumptions about the effectiveness of SAVI deployment in
reducing certain classes of faults. Specifically, for each fault class, experts from SAVI-
member companies assigned a 0, 50, or 100% probability value for the impact of the SAVI
approach on fault-removal efficiency and improved early fault detection. 3 This evaluation
resulted in a defect-removal efficiency of 66% for requirement defects. We also added a
skeptical scenario in which we reduced the defect-removal efficiency by a factor of 0.5 to
33%.
________________________________________________________________________________
3
These estimates were made based on the proof-of-concept demonstration experience and member company
experiences with analytical model-based technologies.
Fairley and Willshire classified rework into three categories [Fairley 2005]:
1. Evolutionary rework, which is caused by external factors, including changing requirements,
design constraints, and environmental factors. This rework is unavoidable given the unfore-
seeable nature of external factors.
2. Retrospective rework, which is conducted to improve structure, functionality, behavior, or
quality attributes of previous versions to accommodate the needs of the current version.
3. Corrective rework, which is aimed at fixing defects discovered in current and previous ver-
sions.
In our rework cost-avoidance and ROI calculations, we only take into account corrective rework
before initial delivery. SAVI practices will have cost-savings effects beyond directly reducing the
rework cost. Examples of these effects include avoiding the programmatic cost of delays in system delivery and reducing continuing sustainment costs. Furthermore, retrospective and evolutionary rework may also cost less due to SAVI. The ROI, PV, and NPV calculations do not
reflect such additional cost savings.
Software was first used in commercial aircraft in 1968 when Litton LTN-52 Inertial Navigation
Systems entered service on the Boeing 707 [Potocki de Montalk 1991]. Software has since grown
to become more important for various services in an aircraft; correspondingly, software has in-
creased in size and complexity.
To calculate the SLOC LineFit points that are plotted in Figure 1, we computed the intercept at which the fitted line crosses the y-axis by using the existing x-values and y-values. The intercept is based on a best-fit regression line plotted through the known x-values and known y-values. The fit uses the slope of the linear regression line through the data points, with ln(SLOC) as the known y-values and the years as the known x-values. The slope is the vertical distance divided by the horizontal distance between any two points on the line, which is the rate of change along the regression line.
Formally, we have a = ȳ − b·x̄ and b = Σ(x − x̄)(y − ȳ) / Σ(x − x̄)², where x̄ and ȳ represent the averages of the known data. The fitted line in ln(SLOC) captures the exponential growth, which we used to project the number of SLOC based on trends in existing aerospace software. The SLOC LineFit is computed as SLOC LineFit = e^(b·Year + a). When we compared the values of SLOC and SLOC LineFit, we
saw that small code sizes grew faster, but the projections held well for 1985–1993. Therefore,
when computing the slope (b), we used the interval 1985–1993, for which we had public data.
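The fit described above can be reproduced in a few lines; in the sketch below, the (year, MSLOC) pairs are placeholders rather than the Airbus and Boeing data points plotted in Figure 1.

```python
# Sketch of the ln-linear fit: regress ln(SLOC) on year, then project
# SLOC LineFit = e^(b*Year + a). The (year, MSLOC) pairs are placeholders,
# not the actual data behind Figure 1.
import math

data = [(1985, 2.5), (1989, 4.0), (1993, 6.5)]   # hypothetical observations

xs = [year for year, _ in data]
ys = [math.log(msloc) for _, msloc in data]       # y = ln(SLOC)
x_bar = sum(xs) / len(xs)
y_bar = sum(ys) / len(ys)

b = (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
     / sum((x - x_bar) ** 2 for x in xs))          # slope
a = y_bar - b * x_bar                              # intercept

def sloc_linefit(year: int) -> float:
    """Projected MSLOC for a given year from the exponential trend line."""
    return math.exp(b * year + a)

print(f"slope b = {b:.4f}, intercept a = {a:.2f}")
print(f"projected size in 2010: {sloc_linefit(2010):.1f} MSLOC")
```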
We estimated the system-development cost under the “as-is” process by first using COCOMO II
to estimate the software-development cost and then extrapolating the total system-development
cost based on a realistic multiplication factor that the SAVI members agreed with. This factor of
1.55 (corresponding to software representing 66% of total system-development cost) was considered nominal for the 2010 time frame. This figure was further confirmed in a follow-on SAVI
ROI study by explicitly estimating the system cost through the use of SEER [Ward 2011, SAVI
2015a, SAVI 2015b].
COCOMO II has been designed with the following capabilities [Boehm 2000, p. 3]:
1. Provide accurate cost and schedule estimates for both current and likely future software pro-
jects.
2. Enable organizations to easily recalibrate, tailor, or extend COCOMO II to better fit their
unique situations.
3. Provide easy-to-understand definitions of the model’s inputs, outputs, and assumptions.
4. Provide a constructive, normative, and evolving model.
COCOMO II represents the evolution of COCOMO 81 [Boehm 1981], which it replaces. The
COCOMO family of tools has enjoyed wide adoption in the software industry and has been suc-
cessfully tailored to specific domains within that industry.
COCOMO II uses a measure of SLOC that represents the size of the system to be developed. The
measure of SLOC in COCOMO II follows the guidelines developed by the Software Metrics Def-
inition Group [Park 1992].
________________________________________________________________________________
4
Computed Consumer Price Index average from 1991 to 2008 [U.S. Department of Labor 2011]
Following COCOMO II guidance, we set attribute ratings conservatively toward the nominal
level. Thus, if a level fell between high and very high, we set it at high. The following attributes
were assigned ratings with the effort factor shown in parentheses, and the rating assignments are
summarized in Table 2. The attributes are as follows:
• Aerospace development is conducted by internationally distributed teams that engage in mul-
tisite development (SITE).
• Aerospace software is safety critical; as a result, it requires a high degree of reliability
(RELY).
• Aerospace software is embedded, operates under stringent processor and memory-resource
constraints, and requires efficient utilization of processing hardware (TIME) and memory
(STOR).
• Aerospace software requires more documentation than conventional software (DOCU).
• Aerospace software is more complex than conventional software (CPLX).
The multisite-development (SITE) attribute indicates the degree of site collocation (from fully
collocated to international distribution) and communication support (from surface mail and some
phone access to full interactive multimedia). We chose the level
• very low (1.22): international
The required-software-reliability (RELY) attribute denotes the extent to which the software must
perform its intended function over a period of time. We chose three criticality levels to reflect less
critical, highly critical, and safety-critical subsystems:
• nominal (1.00): moderate, easily recoverable losses
• high (1.10): high financial loss
• very high (1.26): risk to human life
The execution-time-constraint (TIME) attribute indicates the expected use of processing capacity.
We chose the level
• high (1.11): 70% use of available execution time
The main-storage-constraint (STOR) attribute represents the degree of main storage constraint im-
posed on a software system or subsystem. We chose the degree
• high (1.05): 70% use of available storage
Regarding the attribute that matches documentation needs to lifecycle needs (DOCU), developing for reusability imposes constraints on a project's RELY and DOCU ratings.
The product-complexity (CPLX) rating is the subjective weighted average of the complexity rat-
ing with respect to control operations, computational operations, device-dependent operations,
data-management operations, and user-interface management operations. Aerospace software
ranks extra high for control operations and device-dependent operations due to the hard real-time manner in which resources and devices are managed. For computational operations, database operations, and user interfaces (less critical and highly critical subsystems), we considered the complexity to be at least high. Few software systems exhibit the complexity of aerospace software;
therefore, we chose the ranking of very high for safety-critical subsystems. We chose levels
• high (1.17)
• very high (1.34)
Because large aerospace software consists of many subsystems with different requirements and criticalities, we decomposed the system into a number of modules in COCOMO II and set different switch levels for each subsystem, resulting in different effort adjustment factor (EAF) values.
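As a sketch of how these settings enter the cost estimate, the effort adjustment factor for a module is the product of its effort multipliers, and COCOMO II scales effort roughly as A × Size^E × EAF. The snippet below multiplies the ratings quoted in this section for a safety-critical module; the DOCU multiplier, the calibration constant A, and the exponent E are illustrative assumptions rather than values taken from the study.

```python
# Sketch: EAF is the product of the effort multipliers, and effort scales
# roughly as A * KSLOC**E * EAF (person-months). Multipliers below are the
# ones quoted in this section for a safety-critical module; DOCU, A, and E
# are illustrative assumptions.

multipliers_safety_critical = {
    "SITE": 1.22,   # very low: international multisite development
    "RELY": 1.26,   # very high: risk to human life
    "TIME": 1.11,   # high: 70% use of available execution time
    "STOR": 1.05,   # high: 70% use of available storage
    "CPLX": 1.34,   # very high: safety-critical complexity
    "DOCU": 1.11,   # high (assumed multiplier value)
}

eaf = 1.0
for value in multipliers_safety_critical.values():
    eaf *= value

A, E = 2.94, 1.10   # assumed calibration constant and scale exponent
ksloc = 1000.0      # a 1 MSLOC safety-critical module, for illustration

effort_person_months = A * ksloc ** E * eaf
print(f"EAF = {eaf:.2f}, estimated effort ≈ {effort_person_months:,.0f} person-months")
```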
Table 2 summarizes the switch settings in COCOMO II that focus on system-specific aspects.
Table 2: Summary of Switch Settings in COCOMO II for the SAVI Cost Model
         Safety Critical   Highly Critical   Less Critical
STOR     High              High              High
TIME     High              High              High
DOCU     High              High              High
COCOMO II has an additional set of parameters that characterizes qualities more specific to or-
ganizations, as they are related to staff expertise, organization maturity, development environ-
ment, and the like. These parameters, which we set to the nominal value, are
• database size (DATA)
• platform volatility (PVOL), referring to hardware and operating systems
• use of software tools (TOOL)
________________________________________________________________________________
5
RUSE captures the effort needed to make software components intended for reuse.
Table 3: Size of Subsystems with Respect to Criticality and Size of Reused Code Base in MSLOC
(Columns: criticality, base MSLOC, and reused code base in MSLOC at 30%, 40%, 50%, 60%, and 70% reuse)
COCOMO II has additional switches that affect how the cost estimation reflects the amount of
code reuse. We considered software understanding (SU) to be higher than a nominal value (equal
to 20), which implies that the system has good structure (high cohesion, low coupling), good ap-
plication clarity (good correlation between program and application code), and a high degree of
self-descriptiveness (good code commentary, good and useful documentation overall). The degree
of unfamiliarity of the software (UNFM) is 0.2, indicating that the software is mostly familiar and
is lower than the nominal value of 0.4. We set the remaining parameters to nominal values. As a
result, the computed adjustment factor is 20.
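The sketch below shows how these reuse parameters are typically combined in the COCOMO II reuse model, which converts adapted code into equivalent new SLOC. Only SU = 20 and UNFM = 0.2 come from the text above; the DM, CM, IM, and AA values are illustrative assumptions.

```python
# Sketch of the COCOMO II reuse adjustment: adapted (reused) code is converted
# into equivalent new SLOC through an adaptation adjustment modifier (AAM).
# SU = 20 and UNFM = 0.2 come from the text above; DM, CM, IM, and AA are
# illustrative assumptions, not values from the study.

def equivalent_ksloc(adapted_ksloc: float, dm: float, cm: float, im: float,
                     su: float = 20.0, unfm: float = 0.2, aa: float = 0.0) -> float:
    """Equivalent new KSLOC for adapted code.

    dm, cm, im: percentage of design, code, and integration modified;
    aa: assessment and assimilation increment; su: software understanding
    increment; unfm: programmer unfamiliarity (0.0-1.0).
    """
    aaf = 0.4 * dm + 0.3 * cm + 0.3 * im          # adaptation adjustment factor
    if aaf <= 50:
        aam = (aa + aaf * (1 + 0.02 * su * unfm)) / 100
    else:
        aam = (aa + aaf + su * unfm) / 100
    return adapted_ksloc * aam

# Example: 10 MSLOC of reused code with modest modification percentages.
print(f"Equivalent new code: {equivalent_ksloc(10_000, dm=10, cm=15, im=20):,.0f} KSLOC")
```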
Table 4: Estimated Software-Development Cost in Millions of US$, Given MSLOC and Amount of Re-
use Using the “As-Is” Process
MSLOC   Reuse   Nominal   Lower Bound   Upper Bound
On the basis of historical data from previous projects in the industry, we applied 1.55 as a multi-
plier to the software-development cost to derive the total system-development cost. In a follow-on
ROI study [SAVI 2015b], the commercial tool SEER was used to build a SEER-SEM and SEER-
H model of a Boeing 777-200 to explicitly estimate the cost of the non-software portion of the
system and compare it to both publicly available data and the estimates of the original SAVI ROI
study presented in this report. The follow-on study confirms that the cost multiplier of 1.55 was
acceptable for the 2010 time frame. Unfortunately, software content continues to grow while the physical parts count remains stable, so software rises from 66% of the total system-development cost in 2010 to a projected 88% by 2024. By then, the cost multiplier will be only 1.12.
If we plot the cost across different reuse percentages, we can see that the software-development
cost grows linearly with a decrease in reuse percentage. Figure 2 illustrates this cost for the three
system sizes’ nominal and lower bound values.
________________________________________________________________________________
6
COCOMO computes the cost at full granularity, and we use the exact values for all calculations. However, for
readability purposes, we present the cost estimates rounded off to millions of US$.
Figure 2: Estimated Software-Development Cost (in Millions of US$) as a Function of Reuse Percentage for the Three System Sizes, Nominal and Lower Bound Values
Table 5 shows the total system-development cost. We used these numbers throughout the remain-
der of the ROI analysis. Again, we present the nominal, lower bound, and upper bound estimates.
The rework fraction of total software-development work can be as high as 60% to 80% for very large projects [Basili 1994, 2001; Jones 1996; Cross 2002]. To determine the rework cost as a percentage of total system-development cost, we used 50% as an approximation, drawn from software being 66% of system-development cost and rework being around 70% of software-development cost (0.66 × 0.70 ≈ 0.46, or roughly half). We also chose 30% as a more conservative number.
Researchers have carried out a number of studies to determine where defects are introduced in the
development lifecycle, when these defects are discovered, and the resulting rework cost. We lim-
ited ourselves here to work previously performed by the National Institute of Standards and Tech-
nology (NIST), Galin, Boehm, and Dabney [Boehm 1981, RTI 2002, Dabney 2003, Galin 2004].
The NIST data primarily focuses on IT applications, while the other studies draw on data from
safety-critical systems. Findings for the percentages of defects introduced and discovered were
quite consistent across these studies, with the exception that higher leakage rates into operation
are acceptable in IT systems.
Table 6 shows the percentages of defect introduction and discovery that we used for this ROI
study. The Row Total column shows the percentages of defects introduced in each development
phase. Each row shows the distribution of each percentage across phases, and all of the entries in
a row add up to the row total. For example, 35.25% of all defects are requirements-related defects,
while 16.5% of all defects are requirements errors that are detected during testing.
The percentages reflect the lower defect-leakage rates of 2.5% into operation for safety-critical
systems. Since the rework cost estimates derived from the COCOMO II model include only the
cost of rework through integration, we needed to take only those percentages into account. We
normalized the defect percentages so that they add up to 100%.
The same studies by NIST, Galin, Boehm, and Dabney provide rework cost factors [Boehm 1981,
RTI 2002, Dabney 2003, Galin 2004]. For the purpose of our study, we used Dabney’s data from
a study analyzing ROI and defects for the NASA IV&V facility. This domain and its applications
have characteristics that are closer to the avionics domain than those of the other sources dis-
cussed. This data was further corroborated by recent experiences of the SAVI participant compa-
nies.
Table 7 lists the rework cost factors for these studies with [Boehm 1981] and [Galin 2004], shown
in the same column due to their similarity. The studies used different phase breakdowns, which
we indicate with asterisks in the table. The table shows that, according to Dabney’s data, it costs
130 times more to detect and remove a requirements fault at integration than in the requirements
phase. According to Dabney, for a fault introduced in coding and removed at the time of integra-
tion, the corresponding escalation in cost is 13 times more. SAVI focuses on reducing faults at-
tributed to requirements; for our purposes, we applied the multipliers in Column 3 of Table 7.
Table 7: Defect-Removal Cost, Given the Phase of Origin
Requirements 1 1 1
Design 1 2.5 5 1 1 1
Unit Coding 5 6.5 10 5 2.5 2 1 * 1 1
Unit Test 10 * 50 10 * 10 10 * 5 * 1
Integration 10 16 130 10 6.4 26 10 * 13 1 2.5 3 * 1 1
System/
Acceptance 15 40 * 15 16 * 20 * 10 6.2 * * 2.5 *
Test
We multiplied the defect percentages from Table 6 by the Dabney rework cost factors shown in Table 7. The resulting numbers represent nominal rework costs and are shown in Table 8.
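That multiplication step can be sketched as follows; only the 130x factor for a requirements fault removed at integration is quoted in the text, and the defect-share values below are placeholders rather than the full Table 6 distribution.

```python
# Sketch of the weighting step: each cell of the defect introduction/detection
# distribution (Table 6) is multiplied by the corresponding rework cost factor
# (Table 7) and summed to obtain a nominal rework weight. Shares below are
# placeholders; only the 130x integration factor is quoted in the text.

phases = ["requirements", "design", "code", "unit_test", "integration"]

# fraction of all defects that are requirements-origin and detected in each phase (placeholder values)
req_defect_share = {"requirements": 0.050, "design": 0.040, "code": 0.030,
                    "unit_test": 0.165, "integration": 0.068}

# cost factor for removing a requirements-origin defect in each phase (130 quoted; others illustrative)
req_cost_factor = {"requirements": 1, "design": 5, "code": 10,
                   "unit_test": 50, "integration": 130}

nominal_rework_weight = sum(req_defect_share[p] * req_cost_factor[p] for p in phases)
print(f"Nominal rework weight for requirements-origin defects: {nominal_rework_weight:.2f}")
```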
We analyzed the types of faults (defects) in Hayes’s taxonomy and evaluated how much SAVI
will be able to improve a development team’s ability to detect and remove a fault early and thus
prevent it from flowing downstream. Table 9 shows the anticipated effects of this process. The
second column denotes the fraction of faults for each fault class out of the total set of faults based
on Hayes’s taxonomy [Hayes 2003]. Hayes’s data does not suggest distributions of subfaults. In
the absence of empirical data and for the purpose of our analysis, we assume that subfaults are
uniformly distributed within their major fault class. For example, we can trace 32.9% of faults to Major Fault Class 1.2, which consists of three subfault classes; each subclass therefore accounts for one-third of the major fault class, or about 11% of all faults (see Column 5).
Industry members of the SAVI project discussed and estimated the reduction of faults achievable by detecting them earlier in the development lifecycle,7 drawing on both SAVI and insights gained from the proof-of-concept demonstration project, which showed the feasibility of detecting different types of defects earlier in the lifecycle. For each subfault class, the potential effects include the following, with the chosen multiplier shown in Column 4 of Table 10:
________________________________________________________________________________
7
The number of faults being introduced will probably not change. However, the primary goal is to detect and re-
move faults early in the development lifecycle.
1.1 Incompleteness / 1.1.1 Incomplete Decomposition: Failure to adequately decompose a more abstract specification
1.2.3 Missing Description of Initial System State: Failure to specify the initial system state, when that state is not equal to 0
1.3 Incorrect / 1.3.1 Incorrect External Constants: Specification of an incorrect value or variable in a requirement
1.3.3 Incorrect Description of Initial System State: Failure to specify the initial system state when that state is not equal to 0
1.9 [Reserved for Future]: Requirement that is specified but difficult to achieve (the requirements statement or functional description cannot be true in the reasonable lifetime of the product)
1.12 Intentional Deviation: Requirement that is specified at a higher level but intentionally deviated from at a lower level
Table 10: Expected Removal Efficiency of Faults and Defects When Deploying SAVI
1.1.1 No impact 0 0 0
1.1 Incompleteness 0.209
1.1.2 No impact 0 0 0
1.2 Omitted or Missing 0.329 1.2.2 SAVI likely to prevent all 1 0.11 0.055
1.8 Not Traceable 0.014 SAVI likely to prevent all 1 0.01 0.005
1.11 Misplaced 0.007 SAVI likely to prevent some 0.5 0.01 0.005
1.12 Intentional Deviation 0.007 SAVI likely to prevent all 1 0.01 0.005
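The aggregation implied by Table 10 can be sketched as a weighted sum: each subfault class contributes its fraction of all faults multiplied by the expert-assigned impact value (0, 0.5, or 1). The rows below use only the excerpt shown above (with the 1.1 fraction split evenly across its two subclasses), so the result is illustrative and does not reproduce the 66% figure, which requires the full table.

```python
# Sketch of the Table 10 aggregation: overall removal efficiency is the
# fault-fraction-weighted average of the expert-assigned impact values
# (0 = no impact, 0.5 = prevent some, 1 = prevent all). Rows are limited to
# the excerpt above, so the total does not reproduce the report's 66% figure.

table10_excerpt = [
    # (subfault class, fraction of all faults, SAVI impact value)
    ("1.1.1 Incomplete Decomposition", 0.1045, 0.0),   # no impact (half of 0.209)
    ("1.1.2",                          0.1045, 0.0),   # no impact
    ("1.2.2 Omitted or Missing",       0.110,  1.0),   # SAVI likely to prevent all
    ("1.8 Not Traceable",              0.014,  1.0),
    ("1.11 Misplaced",                 0.007,  0.5),   # SAVI likely to prevent some
    ("1.12 Intentional Deviation",     0.007,  1.0),
]

weighted_impact = sum(fraction * impact for _, fraction, impact in table10_excerpt)
covered_fraction = sum(fraction for _, fraction, _ in table10_excerpt)
print(f"Removal efficiency over this excerpt: {weighted_impact / covered_fraction:.0%}")
```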
The resulting rework cost-avoidance values are listed in Table 11 and plotted in Figure 3. As expected, the plot shows linear growth in cost savings due to the linear growth in total system-development cost, as shown in Figure 2.
Table 11: Avoided Cost as a Function of Rework and Software Reuse
Figure 3: Rework Cost-Avoidance as a Function of Reuse for Three Project Sizes with 30% and 50% Rework
The investment under consideration is the cost of maturing and transitioning SAVI into existing
practice by SAVI-member companies during the multi-year SAVI initiative. This investment has
been estimated to be $86M.
The arithmetic and logarithmic return on investment (ROI) are calculated as ROIa = (Vf − Vi) / Vi and ROIl = ln(Vf / Vi), where Vf and Vi represent the value and the investment, respectively. We computed ROIa and ROIl, where Vi equals $86 million, using the nominal values of cost avoidance for the various scenarios in Table 11.
Net present value (NPV), which is also known as net present worth, is useful when appraising a long-term project. It is computed as NPV = Σt Rt / (1 + i)^t, where t denotes the time of the cash flow, i is the rate of return that could be earned on an investment in the financial markets with similar risk, and Rt is the net cash flow (i.e., the amount of cash, inflow minus outflow) at time t. Inflow is computed as the cost avoided at time t (i.e., we compute how the total cost avoided is distributed over time).
Based on a project starting in 2010 (t = 0) and finishing in 2018 (t = 8), in which software starts to
be developed in 2014, we aligned the development phases and computed the percentage of rework
cost avoided in each phase as follows:
• Requirements, 2014, 0.04% (computed as 0.010/23.173)
• Design, 2015, 0.27%
• Implementation, 2016, 2.37%
• Test, 2017, 35.60%
• Integration, 2018, 61.71%
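A sketch of the discounting step just described follows. The phase percentages are the ones listed above; the cost-avoidance total, the discount rate, and the even spreading of the $86M investment across 2010 through 2018 are illustrative assumptions.

```python
# Sketch of the PV/NPV computation: spread the total cost avoidance over the
# 2014-2018 development phases using the listed percentages, treat the $86M
# investment as an outflow, and discount each year's net cash flow to 2010.
# The cost-avoidance total, discount rate, and even investment spread are
# illustrative assumptions.

cost_avoided_musd = 2392.0      # e.g., the 27 MSLOC nominal case from the abstract
investment_musd = 86.0
discount_rate = 0.10            # assumed rate of return i

inflow_share_by_year = {        # fraction of total cost avoidance realized per year
    2014: 0.0004, 2015: 0.0027, 2016: 0.0237, 2017: 0.3560, 2018: 0.6171,
}
years = range(2010, 2019)                        # t = 0 in 2010 through t = 8 in 2018
outflow_per_year = investment_musd / len(years)  # assumption: investment spread evenly

npv = 0.0
for year in years:
    t = year - 2010
    inflow = cost_avoided_musd * inflow_share_by_year.get(year, 0.0)
    npv += (inflow - outflow_per_year) / (1 + discount_rate) ** t

print(f"NPV (millions of US$, discounted to 2010): {npv:,.0f}")
```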
The values for NPV and present value (PV) are shown in Table 12 for a defect-removal efficiency
of 33% and in Table 13 for a defect-removal efficiency of 66%. The tables present the data for all
three system sizes, five reuse percentages, and two rework percentages used in previous calcula-
tions. The columns show the arithmetic ROI, the logarithmic ROI, NPV, and PV for the years
2010 through 2018.
Figures 4 through 6 show the trends for arithmetic ROI (Figure 4), logarithmic ROI (Figure 5), and NPV (Figure 6) as a function of reuse. As expected, the plots show linear progression in all three cases.
Figure 4: Projected Arithmetic ROI as a Function of Reuse for Three Project Sizes with 30% and 50%
Rework
Figure 5: Projected Logarithmic ROI as a Function of Reuse for Three Project Sizes with 30% and
50% Rework
Figure 6: Computed NPV as a Function of Reuse for Three Project Sizes with 30% and 50% Rework
Achieving cost efficiency and high levels of reuse requires that developers design a system for
targeted levels of reuse and avoid costly alterations and modifications. COCOMO II supports re-
use in two ways by providing: (1) a proactive design-for-reuse switch and (2) a specification of
the complexity of achieving reuse when incorporating existing software into a new system (e.g., percentage of
code, design, integration modified, software understanding, and automatic-translation capability).
Both approaches increase the EAF and, thus, the cost. We used the second approach with nominal
values for reuse. We did not set a specific design-for-reuse switch, since different organizations
adopt different strategies for this purpose. Reuse directly affects development cost as well as any
development gain. Computed gains should be adjusted according to any increase or decrease in
development cost.
The COCOMO II data used in this report focuses on the system-development cost, a cost that begins with the design of the system and continues until its first delivery and deployment. Thus, our
data does not address ongoing lifecycle costs such as maintenance. For long-lived systems (those
exceeding 10 years of use) and systems experiencing frequent changes, the sustainment cost is
significant. For example, consider the lifecycle costs for a computer system for which the hard-
ware has become obsolete and the software needs to be revised and recertified on new target hard-
ware. The system may also be subject to changes in federal regulations that require incorporating
new software functionality. A model-driven approach as outlined by SAVI will undoubtedly af-
fect system-maintenance cost in a positive way, but it is unclear how much of an effect this ap-
proach will have.
There are alternatives to COCOMO II for system cost estimation. The REVised Intermediate
COCOMO (REVIC) is a cost-estimation tool developed by R. Kile and the U.S. Air Force Cost
Analysis Agency that is specific to DoD needs [DoD 1995]. It has been calibrated using com-
pleted DoD projects (development phase only). REVIC takes the same inputs as COCOMO II. On
average, the values predicted by the effort and schedule equations in REVIC are higher than those
predicted in COCOMO II. We did not use REVIC as a foundation for this study for the following
reasons:
• COCOMO II is a newer model than REVIC, which was developed and released in the early
1990s.
A second alternative, the Constructive Systems Engineering Cost Model (COSYSMO), based on
COCOMO II, can help planners reason about the economic aspects and consequences of systems
engineering on projects [COSYSMO 2011]. For the purpose of our application, COCOMO II was
deemed sufficient.
Cost estimation using function-point analysis, created by Albrecht [Albrecht 1979], is another via-
ble technique with significant applications in industry [Garmus 2001, Jones 2007]. Albrecht’s ob-
jective was to create a metric for software productivity and quality
• in any known programming language
• in any combination of languages
• across all classes of software
Function-point analysis is intended for use in discussions with clients, contracts, large-scale statis-
tical analysis, and value analysis. SLOC metrics are intrinsically language dependent. For exam-
ple, considering software productivity should take context into account: Does the software use a
low-level language or a high-level language, and can a given high-level language be further clas-
sified into a procedural, logic-based, or object-oriented language? A function point consists of the
weighted totals of five external aspects of software applications: types of inputs to the application,
outputs that leave the application, inquiries that users can make, logical files that an application
maintains, and interfaces to other applications.
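To illustrate, an unadjusted function-point count is simply a weighted sum over those five element types. In the sketch below, the element counts are hypothetical and the weights are typical average-complexity values used in function-point practice, not figures from this report.

```python
# Sketch of an unadjusted function-point count: a weighted sum over the five
# external element types named above. Element counts are hypothetical; the
# weights are typical average-complexity values from function-point practice.

weights = {
    "external_inputs": 4,
    "external_outputs": 5,
    "external_inquiries": 4,
    "internal_logical_files": 10,
    "external_interface_files": 7,
}

counts = {
    "external_inputs": 30,
    "external_outputs": 25,
    "external_inquiries": 20,
    "internal_logical_files": 12,
    "external_interface_files": 8,
}

unadjusted_fp = sum(counts[k] * weights[k] for k in weights)
print(f"Unadjusted function points: {unadjusted_fp}")
# A language-dependent conversion factor (SLOC per function point) can then
# relate this count to SLOC-based measures such as those used with COCOMO II.
```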
Given the data available to us, we can apply function-point analysis in our context. The analysis
would be driven by function points as the primary metric of the complexity and size of a system-
development effort. There are guidelines and data from projects to aid in estimating the number of
function points. Furthermore, we can use conversion factors to convert between SLOC and the
number of function points. We conducted a subset of our calculations here using Capers Jones’s
data, including some based on defects per function point from industrial projects [Jones 2007].
From those calculations, we observed that the computed cost reduction and ROI numbers are
slightly higher than those derived from COCOMO II. In the interest of space, we do not include
those calculations in this document.
Our COCOMO II analysis is agnostic to the organizational structure and business drivers of the
main contractor, suppliers, and subcontractors. It reports the numbers for the entire project but
does not elaborate on the benefits to the various suppliers and subcontractors. While it is easy to
see that the SAVI approach benefits contractors at all levels, it would be worthwhile to conduct a
refined ROI analysis from the perspective of the entire acquisition and development lifecycle,
________________________________________________________________________________
8
Commercial avionics systems have safety-critical requirements. In that respect, they present the same chal-
lenges as many weapon systems acquired by the DoD.
Software in avionics continues to grow in complexity and size, evidenced by the number of sub-
systems, their dependencies, and increased functionality. Current state-of-practice methods, pro-
cesses, and techniques do not scale well, causing projects to overrun schedules and budgets, pri-
marily due to the scale and complexity of systems. The cost of developing large-scale software of the projected size (software that must also satisfy high safety-criticality requirements and be reusable) is truly large, especially considering that the total system-development cost for a large airframer (aerospace manufacturer) is in the range of $10 to $20 billion. Thus, software is projected to become the dominant cost of developing a new product.
The ROI analysis discussed in this report clearly indicates that an organization could realize sig-
nificant financial gains by using the SAVI approach.
The predicted returns were considered higher than anticipated. This led to several follow-on activ-
ities. First, one of the SAVI-system-integrator members obtained an independent assessment by
its organization’s cost estimating group, which agreed with the findings of this report. Second, this initial ROI study was followed by a second SAVI ROI study. In the second study, a Monte Carlo algorithm was used to drive the COCOMO II cost estimation,
resulting in a reduced variation of results. In addition, the commercial tool SEER was used to
build a SEER-SEM and SEER-H model of a Boeing 777-200 to explicitly estimate the cost of the
non-software portion of the system and compare it to both publicly available data and the estimates of the original SAVI ROI study presented here. The second study confirms that the cost multiplier of 1.55 was acceptable for 2010. Unfortunately, software content continues to grow while the physical parts count remains stable, so software rises from 66% of the total system-development cost in 2010 to a projected 88% by 2024.
The second study also considered tailoring the ROI analysis to reflect a subcontractor. This in-
cludes adjusting the scaling factor used to estimate the total cost relative to the software cost, the
degree of software reuse, the overall investment in the SAVI technology, and personnel cost fac-
tors.
[Albrecht 1979]
Albrecht, A. J. Measuring Application Development Productivity. Pages 83–92. Proceedings of
the Joint IBM/SHARE/GUIDE Application Development Symposium. Monterey, California. Octo-
ber 1979.
[Basili 1994]
Basili, V. & Green, S. Software Process Evolution at SEL. IEEE Software. Volume 11. Number 4.
July/August 1994. Pages 58–66.
[Basili 2001]
Basili, V. et al. Building an Experience Base for Software Engineering: A Report on the First
eWorkshop. CeBASE. 2001. http://www.cs.umd.edu/~basili/presentations/profes01.pdf
[Boehm 1981]
Boehm, B. W. Software Engineering Economics. Prentice Hall. 1981.
[Boehm 2000]
Boehm, B. W. et al. Software Cost Estimation with COCOMO II. Prentice Hall. 2000.
[COCOMO II]
COCOMO II. Center for Systems and Software Engineering Website. March 14, 2018 [accessed]
http://sunset.usc.edu/csse/research/COCOMOII/cocomo_main.html
[COSYSMO 2011]
Constructive Systems Engineering Cost Model (COSYSMO). Massachusetts Institute of Technol-
ogy COSYSMO Website. 2011. March 14, 2018 [accessed] http://cosysmo.mit.edu
[Cross 2002]
Cross, S. E. Message from the Director. In The Software Engineering Institute 2002 Annual Re-
port. Software Engineering Institute, Carnegie Mellon University. 2002. https://re-
sources.sei.cmu.edu/library/asset-view.cfm?assetID=30178
[Dabney 2003]
Dabney, J. B. Return on Investment of Independent Verification and Validation Study Preliminary
Phase 2B Report. NASA. 2003.
[DoD 1995]
Department of Defense. The Parametric Cost Estimating Handbook. Joint Government/Industry
Initiative. 1995.
[Feiler 2009]
Feiler, P. H. et al. System Architecture Virtual Integration: An Industrial Case Study. CMU/SEI-
2009-TR-017. Software Engineering Institute, Carnegie Mellon University. 2009. https://re-
sources.sei.cmu.edu/library/asset-view.cfm?assetid=9145
[Feiler 2010]
Feiler, P. et al. System Architecture Virtual Integration: A Case Study. Embedded Real Time Soft-
ware and Systems Conference (ERTS2010). Toulouse, France. May 2010.
http://web1.see.asso.fr/erts2010/Site/0ANDGY78
/Fichier/PAPIERS%20ERTS%202010%202/ERTS2010_0105_final.pdf
[Galin 2004]
Galin, D. Software Quality Assurance: From Theory to Implementation. Pearson/Addison-Wes-
ley. 2004.
[Garmus 2001]
Garmus, D. & Herron, D. Function Point Analysis: Measurement Practices for Successful Soft-
ware Projects (Information Technology Series). Addison-Wesley. 2001.
[Hatton 2005]
Hatton, L. Estimating Source Lines of Code from Object Code: Windows and Embedded Control
Systems. University of Kingston. 2005.
[Hayes 2003]
Hayes, J. H. Building a Requirement Fault Taxonomy: Experiences from a NASA Verification
and Validation Research Project. Pages 49–59. 14th International Symposium on Software Relia-
bility Engineering (ISSRE). Denver, Colorado. November 2003.
[Jones 1996]
Jones, C. Applied Software Measurement: Assuring Productivity and Quality. McGraw-Hill.
1996.
[Jones 2007]
Jones, C. Estimating Software Costs: Bringing Realism to Estimating. McGraw-Hill. 2007.
[Lutz 1993]
Lutz, R. R. Analyzing Software Requirements Errors in Safety-Critical, Embedded Systems.
Pages 126-133. In Proceedings of the IEEE International Symposium on Requirements Engineer-
ing. San Diego, California. January 4-6, 1993. http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&ar-
number=324825
[Miller 1995]
Mirsky, S. M. et al. Guidelines for the Verification and Validation of Expert System Software and
Conventional Software. NUREG/CR-6316. U.S. Nuclear Regulatory Commission. 1995.
[Redman 2010]
Redman, D., et al. Virtual Integration for Improved System Design: The AVSI System Architec-
ture Virtual Integration (SAVI) Program. Analytic Virtual Integration of Cyber-Physical Systems
Workshop, 31st IEEE Real-Time Systems Symposium (RTSS 2010). San Diego, California. No-
vember 30-December 3, 2010.
[RTI 2002]
RTI International. The Economic Impacts of Inadequate Infrastructure for Software Testing. Plan-
ning Report 02-3. NIST. 2002.
[SAVI 2015a]
Chilenski, J. J. & Ward, D.T. [editors] ROI Estimation. In SAVI AFE 59 Report Summary Final
Report. Aerospace Vehicle Systems Institute. Pages 29-31. 2015. http://savi.avsi.aero/wp-con-
tent/uploads/sites/2/2015/08/SAVI-AFE59-9-001_Summary_Final_Report.pdf
[SAVI 2015b]
Chilenski, J. J. & Ward, D.T. [editors]. EPoCD Use Case Demonstrations. Aerospace Vehicle
Systems Institute. Pages 9-28. 2015. http://savi.avsi.aero/wp-content/up-
loads/sites/2/2015/08/SAVI-AFE59S1-8-002_Summary_Final_Report.pdf
[Ward 2011]
Ward, D. & Helton, S. Estimating Return on Investment for SAVI (a Model-Based Virtual Inte-
gration Process). SAE International Journal of Aerospace. Volume 4. Number 2. October 18,
2011. Pages 934-943. https://saemobilus.sae.org/content/2011-01-2576