
International Journal of Trend in Scientific Research and Development (IJTSRD)
International Open Access Journal
ISSN No: 2456-6470 | www.ijtsrd.com | Volume - 2 | Issue - 1

Novel Metrics in Software Industry

Dinesh Kumar Y
Assistant Professor, Department of CSE, DIET, Visakhapatnam

Nuka Raju Kolli
Associate Professor, Department of CSE, DIET, Visakhapatnam

ABSTRACT

The role of metrics in software quality is well recognized. However, software metrics are yet to be standardized and integrated into development practices across the software industry. While process, project, and product metrics share a common goal of contributing to software quality and reliability, utilization of metrics has been at a minimum. This work is an effort to bring more attention to software metrics. It examines the practices of metrics in the software industry and the experiences of some organizations that have developed, promoted, and utilized a variety of software metrics. As various types of metrics are being developed and used, these experiences show evidence of benefits and improvements in quality and reliability.

Keywords: Software metrics, Cost of defects, State of metrics, Metrics in software industry.

1. Introduction

It is yet to be widely recognized that metrics are a valuable treasure an organization could have. They provide measurement of schedule, work effort, and product size, among many other indicators. The more they are utilized, the more effective and productive the organization becomes. They also provide better control over projects and a better reputation for the organization and its business practices. Software metrics are utilized during the entire software life cycle. Gathered data is analyzed and evaluated by project managers and software developers. The practice of metrics involves Measures, Metrics, and Indicators. A Measure is a way to appraise or determine a quantity by comparing it to a standard or unit of measurement, such as extent, dimensions, or capacity, recorded as data points. The act or process of measuring is referred to as Measurement. A Metric is a quantitative measure of the degree to which a component, system, or process possesses a given characteristic or attribute. An Indicator represents useful information about processes and process improvement activities that result from applying metrics, thus describing areas of improvement.

Instituting a metrics program is a challenge for many organizations, mostly because of the commitment to upfront investment in gathering the data necessary for building useful metrics. In addition to time, cost, and resource factors, developers are often reluctant to collect and archive project data. A commonly cited reason is the misuse of project data against developers and project stakeholders. Team leaders and managers play an important role in the adoption of measurement programs as an integral part of the software engineering culture. They need to be convinced of (and committed to) software measurement, and at the same time promote this culture and reward their teams for it.

The area of software measurement has been highly active for several decades. As a result, there are many commercial metrics available in the market. Such (affordable) metrics can be the starting point for small organizations. However, much more work is needed to standardize, validate, and integrate metrics into software practices. This work brings needed attention to software metrics and examines the current state of metrics in the software industry. The discussion is motivated by the cost of defects and a description of commonly used metrics.

2. Cost of Defects

To present a convincing argument for the benefit of using metrics, one needs to highlight the incentives and payoff. Here we refer to an article, authored by William

@ IJTSRD | Available Online @ www.ijtsrd.com | Volume – 2 | Issue – 1 | Nov-Dec 2017 Page: 1252

T. Ward [1], describing Hewlett-Packard's (HP's) "10x software quality improvement" initiative. The author uses data from a software metrics database and an industry profit-loss model to develop a method to compute the actual cost of software defects. The database is an important element of HP's software quality activities and is a valuable source for different tasks such as quality status reporting, resource planning, scheduling, and calculation of defect costs. Sources of data include product comparisons, analysis of source code size and complexity, defect logging, project post-mortem studies, and project schedule and resource plans. The Software Quality Engineering Group follows definite steps to discover, correct, and retest a defect during testing activities (integration, system, and/or acceptance). The estimated effort here is about 20 hours, and it represents the average effort for discovering and fixing a defect. This effort is calculated using data points from multiple projects that were tracked with the software quality database. Defect cost can also be determined per project or phase, and cost can be weighted based on programmer productivity or product code size. For instance, the following formula shows the cost per defect that is discovered and fixed during the integration through the release phases of a project.

Software Development Cost = SDRC + PL

where SDRC (Software Defect Rework Cost) is determined by the amount of effort and expense required to find and fix defects during the integration through release phases, and PL (Profit Loss) is the revenue loss caused by lower product sales throughout the entire post-release lifetime.

To illustrate, suppose a product has about 110 software defects found and fixed during testing. Each defect requires 20 engineering hours to identify and fix, so the total work effort is 2,200 hours. At $75/hour, SDRC is $165,000, and the rework cost per defect is $1,500. These expenses could have been saved had metrics been used to mitigate those defects. In addition, it should be noted that the other component of defect cost is product profit-loss. Here, missed market-window opportunities result in loss of sales, profits, and competitiveness. This illustrates typical losses that result from the lack of metrics utilization.

3. Common Software Metrics

Unlike software engineering, other disciplines capitalize on the power of quantitative methods to measure their processes and activities. Based on Tom DeMarco's [2] statement, "You can't control what you can't measure", these disciplines apply measurements to gain better control of their projects and the quality of their products. Although software engineering is a new and evolving discipline, experts have proposed quantitative methods applicable to all aspects of software projects with the goal of achieving high-quality products. These methods are related to different activities, including:

Cost and effort estimation: Estimation models [3] help better plan and execute software projects. One factor that plays into the success of applying estimation models is the experience of the organization in predicting effort and cost for new software systems. Mathematical models, such as Boehm's COCOMO [4], Putnam's SLIM [5], and Albrecht's Function Points [6], can be used.

Productivity measures: Productivity models focus on the human side of the project. A key factor in accurate determination of productivity is having sufficient information about the productivity of an individual (or the team) in different scenarios, such as the type of project, team structure, skills and backgrounds, tools, and environment. Measures and metrics for assessing the human side of the project are more challenging to develop and apply than other measures and metrics [7].

Data collection: An important discipline, requiring diligence and careful implementation. Although it has obvious benefits for developing measures and metrics, team members often dislike it. The common perception among some team members is that data collection leads to an uneasy feeling of being "under pressure" and "at risk", as collected data can be used negatively in performance evaluations. The real risk here is that inaccurate data can result in metrics that provide false assessments.

Quality assessment: This activity covers different measures including efficiency, reliability, flexibility, portability, usability, correctness, and many others. Standards that define what quality means in terms of specific project goals are needed. Here, and with historical data, objectives (in terms of measures) should be achieved or exceeded to meet desired quality standards. Although quality assessment is often applied early in the life cycle, it covers, along with "umbrella activities", the entire life cycle [7].

Reliability models: Even though reliability is seen as a quality attribute, reliability assessment models are more

related to software failures, and are mostly applied during testing. The models work well when it is possible to monitor and trace failures during a test or operation. Many quality models use reliability as a factor, and the concept of reliability weighs heavily in the perception of quality.

Other activities include: performance evaluation for optimal solutions, structural and complexity analysis, capability maturity assessment, management by metrics, and evaluation of methods and tools. These activities are becoming an important part of software engineering, as each activity leads to the development of software metrics, some of which evolve into assessment models.

Process, Project, and Product are three common categories for software metrics. Below, we highlight the key focus of each category.

Process Metrics: These metrics focus on software development and maintenance. They are used to assess people's productivity (called private metrics), productivity of the entire organization (called public metrics), and software process improvement. Process assessment is achieved by measuring specific attributes of the process, developing a set of metrics based on the identified attributes, and finally using the metrics to provide indicators that lead to the development of process improvement strategies. Private metrics are designed to help individual team members in self-assessment, allowing an individual to track work tasks and evaluate self-productivity. Public metrics, on the other hand, help evaluate the organization (or a team) as a whole, allowing teams to track their work and evaluate the performance and productivity of the process. A good example is a team's effectiveness in eliminating defects through development, detecting defects through testing, and improving response time for fixes.

Project Metrics: Project metrics are tactical and related to project characteristics and execution. They often contribute to the development of process metrics. The indicators derived from project metrics are utilized by project managers and software developers to adjust project workflow and technical activities. The first application of project metrics often occurs during the cost and effort estimation activity. Metrics collected from past projects are used as a basis from which effort and time estimates are made for new projects. During the project, measured effort and expended time are compared to the original estimates to help track how accurate the project estimates were. When the technical work starts, other project metrics begin to have significance for different measures, such as production rates in terms of models created, review hours, function points, and delivered source code lines. Common software project metrics include:

 Order of growth: A simple characterization of an algorithm's efficiency, allowing comparison of the relative performance of alternative algorithms without focusing on implementation details.

 Lines of code: The physical type is a count of lines including comment and blank lines (not to exceed 25% of all lines of code). The logical type counts the number of "statements" tied to a specific programming language.

 Cyclomatic complexity: Measures the application complexity and describes its flow of control.

 Function points: Reflect functionality relevant to (and recognized by) the end user. They are independent of implementation technology.

 Code coverage: Determines the statements in a body of code that have been executed through a test run and those statements that have not [8].

Other project metrics include coupling, cohesion, requirements size, application size, cost, schedule, productivity, and the number of software developers.

Product Metrics: These metrics focus on measuring key characteristics of the software product. There are many product metrics applicable to analysis, design, coding, and testing. Commonly used product metrics include:

 Specification quality metrics: These metrics provide an indication of the level of specificity and completeness of requirements.

 System size metrics: They measure the system size based on information available during the requirements analysis phase.

 Architectural metrics: These metrics provide an assessment of the quality of the architectural design of the system.

 Length metrics: They measure the system size based on lines of code during the implementation phase.

 Complexity metrics: They measure the complexity of developed source code.

 Testing effectiveness metrics: They measure the


effectiveness of conducted tests and test cases.

Other product metrics focus on design features, quality attributes, code complexity, maintainability, performance characteristics, code testability, and others.

4. Metrics in Software Industry

Software measurement started in the early 1970s in the US and Canada. The SEI at Carnegie Mellon University helped establish many measurement programs, providing a platform to help increase the use of software metrics in the software industry. Organizations such as HP, Motorola, NASA, Boeing, AT&T, and others use software metrics extensively [11]. In Germany, since the late 1980s, companies like Siemens, Bosch, Alcatel, BMW, and others have been integrating software measurement programs into their practices. To present a snapshot of the state of metrics in the software industry, examples from HP, NASA, and Boeing are presented in this section to highlight the initial steps and effort toward integrating the practice of measurement into software development. Many consider these initiatives and efforts a significant contribution toward promoting the practice of software metrics.

Hewlett-Packard (HP)

HP's experience in incorporating a software metrics program has been one of the most reported initiatives in the industry. Grady and Caswell [9] implemented the program in an effort to improve software project management, team productivity, and software quality. These goals were achieved in the short term for individual development projects. Grady and Caswell categorized metrics as primitive or computed. Primitive metrics are those directly observed, such as total development time for the project, number of defects in unit testing, lines of code (the program size), and so forth. Computed metrics cannot be directly observed; they are mathematical aggregations of two or more primitive metrics. Examples of the most widely used computed metrics at HP include:

 Metrics for project scheduling, cost of defects, workload, and project control. For example:
 Average fixed defects/working day
 Average engineering hours/fixed defect
 Average reported defects/working day
 Defects/testing time
 Percent overtime: average overtime per week
 Phase: engineering months/total engineering months

 End product quality metrics. For example:
 Defects/KNCSS (Thousand Non-Comment Source Statements)
 Defects/Lines of Documentation (LOD) not included in the program source code

 Testing effectiveness metrics: An example indicator is Defects/testing time.

 Testing coverage metrics: An example indicator is Branches covered/total branches, which indicates what percentage of the decision points in the program was actually executed.

 Usable functions metrics: An example indicator is Bang, "a quantitative indicator of net usable functions from the user's point of view" [2]. Bang is computed in two ways: for function-strong systems, by counting the tokens entering and leaving the function, multiplied by the weight of the function; for data-strong systems, by counting the objects in the database, weighted by the number of relationships of which each is a member.

 Productivity metrics: An example indicator is NCSS/engineering month.

HP's software metrics program served as a model for many organizations and prompted wide interest among organizations seeking to improve the quality of their products and software development processes.

NASA

NASA implements software metrics with emphasis on improving reliability in software requirements specification and source code. For complete requirement coverage, test plans are also examined, avoiding excessive testing and unnecessary expense. To improve reliability, they consider three life cycle phases: requirements, coding, and testing. Software metrics and error prevention techniques can be applied throughout these phases to help improve reliability [12].

Requirements Metrics: For reliability, NASA's metric tool (called ARM, Automated Requirements Measurement) parses a requirements document file line by line, searching for certain words and phrases. This tool has been used on 56 requirement documents. The


developed measures include [12]:

 Lines of Text: Physical lines, a measure of size.

 Directives: References to figures, tables, or notes.

 Continuances: Phrases that follow an imperative and introduce the specification of requirements at a lower level, for a supplemental requirement count.

 Imperatives: Words and phrases that command that something must be done or provided. The number of imperatives is used as the base requirements count.

 Options: Words that seem to give latitude in satisfying the specifications but can be ambiguous.

 Weak Phrases: Clauses that may cause uncertainty and leave room for multiple interpretations.

 Incomplete: Statements that have TBD (To Be Determined) or TBS (To Be Supplied) clauses.

The ARM software does not evaluate whether the requirements are correct; rather, it evaluates the vocabulary and the individual specification statements used to state the requirements. ARM also evaluates the structure of the requirements document. It identifies the number of requirements at each level of the hierarchical numbering structure. This information helps indicate a potential lack of structure that may impact software reliability by increasing the difficulty of making changes. It may also indicate unsuitable levels of detail that may constrain the software design.

Design and Code Reliability Metrics: For design and code reliability, NASA's Software Assurance Technology Center (SATC) developed a tool that analyzes source code for architecture features and structure to help locate error-prone modules based on source code complexity, size, and modularity. Although there are different complexity measurements, SATC uses Cyclomatic complexity (the number of independent test paths). They found that combining size and complexity makes the most effective evaluation: large modules with high complexity tend to have the lowest reliability. Such modules are a reliability risk because they are difficult to change or modify. SATC uses the following metrics for object-oriented quality analysis:

 Weighted Methods per Class (WMC)
 Response For a Class (RFC)
 Coupling Between Objects (CBO)
 Depth In Tree (DIT)
 Number Of Children (NOC)

These metrics lack industry guidelines; therefore, SATC developed guidelines based on NASA's data, which are made available on NASA's SATC website (http://satc.gsfc.nasa.gov/).

Testing Reliability Metrics: For testing, SATC developed a simulation model for error discovery and for projecting the number of remaining errors in the source code and when such errors will be discovered. This model is based on the Musa model. The Musa model, known as the "Execution Time Model", is used to evaluate computer resources with respect to (1) reduction in the number of faults in a computer program, (2) estimation of the testing time necessary to find and correct system errors to achieve an acceptable level of errors in the code, and (3) determination of software reliability based on the specified program operating cycle and mean time to fault. Effective verification aims to ensure that every requirement is being tested. In order to make sure that the system has the functionality specified, test cases are developed (based on one system state) to test selected sets of functions that are based on related sets of requirements. Here, the requirement's functionality is included in the delivered system when the test is successful. Assessment of traceability of requirements to test cases is also performed, so that each requirement is tested at least once. Note that some requirements are tested more than once, since they are involved in multiple system states.

In addition to reliability metrics applied throughout the life cycle, NASA developed the IV&V Metrics Data Program to gather, verify, sort, store, and distribute software metrics data. Collected data include metrics and their associated problem and product data, allowing users to explore the correlation between metrics and the software. NASA's metrics include McCabe metrics, Line of Code metrics, Requirement metrics, Error metrics, and Halstead metrics. In an effort to promote metrics utilization in the software industry, NASA offers project non-specific data from its repository to the software community through the Metrics Data Program website (http://mdp.ivv.nasa.gov/).

Boeing

Boeing's 777 program earned the company recognition for achievements through its metrics program, among


other related software development initiatives [13]. The Boeing 777 program is one of the most software-intensive commercial airplane programs. It has nearly 2.5 million newly developed lines of on-board code, and other estimates indicate around 4 million additional lines of code for customer options. The software has over one hundred components corresponding to physical boxes in the airplane's control system. Many of them were produced by third-party companies.

At the beginning of the program, various software measures were used by the suppliers and their counterparts to present the status of the work. There was a variety of measures that were hard to understand. About halfway through the 777 development program, a uniform use of software metrics was instituted. Suppliers were asked to report simple, standard software metrics including test definition, resource utilization, and plans for software design, coding, and test execution. In addition, actuals were collected for software problem report totals.

Boeing's implementation of the metrics program is described as follows: "Each supplier was requested to prepare plans for their design, code, and test activities. These plans showed expected totals and the planned completion status for each of the biweekly reporting periods until the task is complete". Following that, biweekly updates showing the actual development status in terms of completed design, code, and tests were requested. Changes to the estimated total size of the effort were also reported, along with plans reflecting the new totals. Information from the metrics was shared with the system developers for improvement purposes. The overall metrics program helped Boeing to improve communications with suppliers, adjust project plans in line with actual progress, and keep the project on schedule. Key characteristics of Boeing's metrics program that were instrumental in supporting this process include uniformity, frequent updates, clear definitions, objective measures, and re-planning, which was strongly encouraged. In addition, Boeing's effort to define measures resulted in a 21-page set of instructions on how to prepare metrics data. The two critical features of the metrics plans were that re-planning was done when needed and that past data was never changed.

Boeing's experience shows that metrics were invaluable, as they helped indicate early enough where the program's risk points were, allowing early corrective actions. Early on, uniform metrics encouraged the application of reasonable checks on plans and discussion of such checks. As a result, communications with suppliers that were prompted by metric data were as important as the metric data itself. In addition, development metrics were used to track progress against the plans for design, code, and testing. They included software size and number of tests. Milestones were indicated on metrics charts; associated with the milestones were success criteria based on design, code, and test completion.

5. Conclusion

The practice of software measurement is lagging behind and has yet to mature enough to be promoted widely across the software industry. Although the importance of metrics has been recognized since the early 1970s, it is making its way into the practice of software development at a slow pace. Organizations such as HP, Motorola, NASA, Boeing, and others have been developing and applying software metrics to their projects. Although their metrics and those applied by others are based on standards, some organizations tend to adapt these metrics to their processes and needs. This was clear in NASA's case, where they detected a lack of aids to assist in evaluating the quality of requirements or individual specification statements.

The experiences discussed in this paper, and many other cases, indicate that when metrics are used early in the development cycle they help detect and correct requirement faults and prevent errors later in the life cycle. Software metrics can be used at each phase of development, as illustrated in Section 4. Metrics can identify potential problems that may lead to errors in the system. Finding these potential problems decreases overall development cost and prevents side effects that may result from making changes later in the development cycle. Using software metrics need not be a time-consuming effort. Measurement activities such as daily tracking should be developed as a habit rather than a burden on developers and the organization.

The effort made by the organizations discussed above (and many others) is a step in the right direction for metrics and software development. Being the main goal of this article, dissemination and awareness of such efforts, and of the availability of many of these metrics to the software community (often for free), is very much needed to help forward the evolution of such initiatives and to broaden the utilization of metrics.

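As a closing illustration, the defect-cost calculation described in Section 2 can be sketched in a few lines of Python. The figures (110 defects, 20 engineering hours per defect, $75/hour) are those reported in the HP example; the function names are our own.

```python
def software_defect_rework_cost(defects, hours_per_defect, hourly_rate):
    """SDRC: effort-based cost of finding and fixing defects."""
    return defects * hours_per_defect * hourly_rate

def software_development_cost(sdrc, profit_loss):
    """Section 2 formula: Software Development Cost = SDRC + PL."""
    return sdrc + profit_loss

# Figures from the HP example in Section 2.
sdrc = software_defect_rework_cost(defects=110, hours_per_defect=20, hourly_rate=75)
print(sdrc)         # 165000 -> $165,000 total rework cost (2,200 hours at $75/hour)
print(sdrc // 110)  # 1500   -> $1,500 rework cost per defect
```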

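To make the ARM-style counts described in Section 4 concrete, the following sketch scans requirement statements for imperatives, weak phrases, and incomplete (TBD/TBS) clauses. This is our own illustration, not NASA's tool, and the word lists are abbreviated assumptions; ARM's actual vocabularies are far larger.

```python
# Minimal ARM-style requirements scan (illustration only, not NASA's ARM tool).
# The word lists below are abbreviated assumptions for demonstration.
IMPERATIVES = ("shall", "must", "will")
WEAK_PHRASES = ("as appropriate", "if practical", "adequate")
INCOMPLETE = ("TBD", "TBS")

def scan_requirements(statements):
    """Count ARM-style indicators across a list of requirement statements."""
    counts = {"imperatives": 0, "weak_phrases": 0, "incomplete": 0}
    for stmt in statements:
        low = stmt.lower()
        counts["imperatives"] += sum(low.count(w) for w in IMPERATIVES)
        counts["weak_phrases"] += sum(low.count(p) for p in WEAK_PHRASES)
        counts["incomplete"] += sum(stmt.count(m) for m in INCOMPLETE)
    return counts

reqs = [
    "The system shall log each transaction.",
    "Response time must be adequate; limits are TBD.",
]
print(scan_requirements(reqs))
# -> {'imperatives': 2, 'weak_phrases': 1, 'incomplete': 1}
```

A high ratio of weak phrases or incomplete clauses to imperatives would flag a specification for review, in the spirit of the ARM measures listed above.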
REFERENCES

1. http://findarticles.com/p/articles/mi_m0HPJ/is_n4_v42/ai_11400873, Hewlett-Packard Journal, October 1991.

2. DeMarco, T. Controlling Software Projects: Management, Measurement & Estimation. Yourdon Press, New York, USA, 1982.

3. http://sern.ucalgary.ca/courses/seng/621/W98/johnsonk/cost.htm

4. http://sunset.usc.edu/research/COCOMOII/index.html

5. http://www.qsm.com/

6. Albrecht, A. J. and J. R. Gaffney. "Software function, source lines of code, and development effort prediction: a software science validation." IEEE Transactions on Software Engineering, 9(6), pp. 639-648, 1983.

7. Pressman, R. S. Software Engineering: A Practitioner's Approach. 6th Edition, McGraw-Hill, 2005.

8. http://www.cenqua.com/clover/doc/coverage/intro.html

9. Grady, R. B. and D. R. Caswell. Software Metrics: Establishing a Company-Wide Program. Englewood Cliffs, N.J.: Prentice-Hall, 1987.

