CK Metrics
CK Metrics: Outline
Objective
CK Metrics: Objective
The CK metrics were designed [1] to support the improvement of software development.
CK Metrics: Definition
WMC (Weighted Methods per Class)
Definition
The sum of the complexities of the methods defined in a class (when all methods are considered equally complex, WMC is simply the number of methods, NOM).
Viewpoints
WMC is a predictor of how much TIME and EFFORT is required to develop and to maintain the class. The larger the NOM, the greater the impact on children, which inherit all the methods defined in the class. Classes with a large NOM are likely to be more application-specific, limiting the possibility of RE-USE and making the EFFORT expended a one-shot investment.
Objective: Low
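As a minimal sketch (assuming unit complexity per method, so WMC reduces to a method count; the `Account` class below is a hypothetical example):

```python
import inspect

def wmc(cls):
    """WMC with unit method complexity: count the methods defined in the class body itself."""
    return sum(1 for member in vars(cls).values() if inspect.isfunction(member))

class Account:
    def deposit(self, amount): ...
    def withdraw(self, amount): ...
    def balance(self): ...

print(wmc(Account))  # 3
```

A fuller implementation would weight each method by a complexity measure (e.g. cyclomatic complexity) instead of counting 1 per method.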
CK Metrics: Definition
DIT (Depth of Inheritance Tree)
Definition
The maximum length from the node to the root of the tree
Viewpoints
The greater the value of DIT: the greater the NOM the class is likely to inherit, making its behaviour more COMPLEX to predict; the greater the potential RE-USE of inherited methods. Small values of DIT in most of a system's classes may be an indicator that designers are forsaking RE-USABILITY for simplicity of UNDERSTANDING.
Objective: Trade-off
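The definition above (longest path from the class to the root of the inheritance tree) can be sketched directly in Python, taking `object` as the root (the `A`/`B`/`C` hierarchy is an illustrative example):

```python
def dit(cls):
    """DIT: maximum length from the class to the root of the inheritance tree (object = depth 0)."""
    if cls is object:
        return 0
    return 1 + max(dit(base) for base in cls.__bases__)

class A: pass
class B(A): pass
class C(B): pass

print(dit(C))  # 3
```

Taking the maximum over `__bases__` handles multiple inheritance, where several paths to the root may exist.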
CK Metrics: Definition
NOC (Number of Children)
Definition
The number of immediate subclasses of the class in the class hierarchy
Viewpoints
The greater the NOC: the greater the RE-USE; the greater the probability of improper abstraction of the parent class; the greater the TESTING requirements for the methods of that class. Small values of NOC may be an indicator of a lack of communication between different class designers.
Objective: Trade-off
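A minimal sketch of this count (using Python's built-in `__subclasses__`, which lists immediate subclasses; the `Shape` hierarchy is a hypothetical example):

```python
class Shape: pass
class Circle(Shape): pass
class Square(Shape): pass

def noc(cls):
    """NOC: number of immediate subclasses of the class."""
    return len(cls.__subclasses__())

print(noc(Shape))   # 2
print(noc(Circle))  # 0
```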
CK Metrics: Definition
CBO (Coupling Between Objects)
Definition
A count of the number of other classes to which the class is coupled
Viewpoints
Small values of CBO: improve MODULARITY and promote ENCAPSULATION; indicate independence of the class, making its RE-USE easier; make the class easier to MAINTAIN and to TEST.
Objective: Low
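CBO is normally computed by static analysis of the source; as a sketch, assume the references have already been extracted into a mapping from each class to the classes it uses (the class names and the `uses` map below are hypothetical):

```python
# Hypothetical extracted references: class -> classes it uses (fields, parameters, calls).
uses = {
    "Order":    {"Customer", "Invoice", "Logger"},
    "Customer": {"Logger"},
    "Invoice":  {"Order"},
    "Logger":   set(),
}

def cbo(cls, uses):
    """CBO: distinct classes that cls uses plus distinct classes that use cls."""
    outgoing = uses.get(cls, set())
    incoming = {other for other, targets in uses.items() if cls in targets}
    return len(outgoing | incoming)

print(cbo("Order", uses))  # 3 -> Customer, Invoice, Logger (Invoice couples in both directions)
```

Counting the set union avoids double-counting a class that is coupled in both directions.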
CK Metrics: Definition
RFC (Response for Class)
Definition
It is the number of methods of the class plus the number of methods called by any of those methods.
Viewpoints
If a large number of methods can be invoked from a class (RFC is high), TESTING and MAINTENANCE of the class become more COMPLEX.
Objective: Low
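Following the definition above (own methods plus the distinct methods they call), a sketch over hypothetical call information, which in practice would come from static analysis:

```python
# Hypothetical call information: each method of the class and the methods it invokes.
class_methods = {"open", "read", "close"}
calls = {
    "open":  {"os.stat", "Logger.info"},
    "read":  {"Buffer.fill"},
    "close": {"Logger.info"},
}

def rfc(class_methods, calls):
    """RFC: the class's own methods plus the distinct methods invoked by them."""
    invoked = set().union(*calls.values())
    return len(class_methods | invoked)

print(rfc(class_methods, calls))  # 6 -> 3 own methods + os.stat, Logger.info, Buffer.fill
```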
CK Metrics: Definition
LCOM (Lack of Cohesion of Methods)
Definition
Measures the dissimilarity of the methods in a class via their use of instance variables.
Viewpoints
Large values of LCOM: increase COMPLEXITY; do not promote ENCAPSULATION and imply that the class should probably be split into two or more classes; help to identify low-quality design.
Objective: Low
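A sketch of the CK formulation of LCOM: count the method pairs that share no instance variables (P) and the pairs that share at least one (Q); LCOM = max(P − Q, 0). The per-method attribute sets below are a hypothetical example:

```python
from itertools import combinations

# Instance variables used by each method of a hypothetical class.
attr_use = {
    "deposit":  {"balance"},
    "withdraw": {"balance"},
    "format":   {"currency"},
}

def lcom(attr_use):
    """CK LCOM: pairs sharing no attributes (P) minus pairs sharing some (Q), floored at 0."""
    p = q = 0
    for m1, m2 in combinations(attr_use, 2):
        if attr_use[m1] & attr_use[m2]:
            q += 1
        else:
            p += 1
    return max(p - q, 0)

print(lcom(attr_use))  # 1  (P=2, Q=1)
```

Here `deposit`/`withdraw` cohere around `balance`, while `format` touches a disjoint attribute, hinting that it may belong elsewhere.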
CK Metrics: Guidelines
METRIC   GOAL LEVEL   RELATED QUALITY
WMC      Low          COMPLEXITY (to develop, to test and to maintain)
DIT      Trade-off    COMPLEXITY vs RE-USABILITY
NOC      Trade-off    RE-USABILITY vs improper abstraction
CBO      Low          ENCAPSULATION, MODULARITY
RFC      Low          COMPLEXITY (to test and to maintain)
LCOM     Low          ENCAPSULATION, COMPLEXITY
CK Metrics: Thresholds
Thresholds of the CK metrics [2,3,4]:
CK in the Literature
CK Metrics & other Managerial performance indicators
Chidamber & Kemerer studied the relation of the CK metrics with managerial performance indicators [2]: productivity, rework effort and design effort.
CK in the Literature
CK Metrics & Maintenance effort
Li and Henry (1993) used the CK metrics (among others) to predict [5]: maintenance effort, measured as the number of lines changed in a class during the 3 years over which the measurements were collected.
CK in the Literature
DIT & Maintenance effort
Daly et al. (1996) conclude in their study [5]: that subjects maintaining OO SW with three levels of inheritance depth performed maintenance tasks significantly quicker than those maintaining an equivalent OO SW with no inheritance.
CK in the Literature
DIT & Maintenance effort
However, Harrison et al. (2000) used the DIT metric to demonstrate [5]: that systems without inheritance are easier to understand and to modify than systems with 3 or 5 levels of inheritance.
CK in the Literature
DIT & Maintenance effort
Poels (2001) uses the DIT metric and demonstrates [5]: that the extensive use of inheritance leads to models that are more difficult to modify.
CK in the Literature
CK Metrics & Fault-proneness prediction
Study                        Input                      Output                Prediction Technique
Basili et al. [6] (1996)     CK metrics, among others   Fault-prone classes   Multivariate Logistic Regression
Briand et al. [7] (2000)     CK metrics, among others   Fault-prone classes   Multivariate Logistic Regression
Kanmani et al. [8] (2004)    CK metrics, among others   Fault ratio           General Regression Neural Network
Nachiappan et al. [9] (2005) CK metrics, among others   Fault ratio           Multiple Linear Regression
Olague et al. [10] (2007)    CK, QMOOD                  Fault-prone classes   Multivariate Logistic Regression
CK : Chidamber & Kemerer, QMOOD: Quality Metrics for Object Oriented Design
Conclusion
The CK metrics measure the complexity of the design. There are no universally defined thresholds for the CK metrics; however, they can be used to identify outlying values. The CK metrics (while measured from the code) have been related to: fault-proneness, productivity, rework effort, design effort and maintenance.
References
[1] Chidamber Shyam, Kemerer Chris, A Metrics Suite for Object Oriented Design, IEEE Transactions on Software Engineering, June 1994.
[2] Chidamber Shyam, Kemerer Chris, Darcy David, Managerial Use of Metrics for Object-Oriented Software: An Exploratory Analysis, IEEE Transactions on Software Engineering, August 1998.
[3] Linda Rosenberg, Applying and Interpreting Object Oriented Metrics, Software Assurance Technology Conference, Utah, 1998.
[4] Stephen H. Kan, Metrics and Models in Software Quality Engineering, Addison-Wesley, 2003.
[5] Genero Marcela, Piattini Mario, Calero Coral, A Survey of Metrics for UML Class Diagrams, Journal of Object Technology, Nov.-Dec. 2005.
References
[6] Victor R. Basili, Lionel C. Briand, Walcelio L. Melo, A Validation of Object-Oriented Design Metrics as Quality Indicators, IEEE Transactions on Software Engineering, October 1996.
[7] Lionel C. Briand, Jürgen Wüst, John W. Daly, D. Victor Porter, Exploring the Relationships between Design Measures and Software Quality in Object-Oriented Systems, Journal of Systems and Software, 2000.
[8] Kanmani S., Uthariaraj V. Rymend, Object Oriented Software Quality Prediction Using General Regression Neural Networks, SIGSOFT Software Engineering Notes, 2004.
[9] Nachiappan Nagappan, Laurie Williams, Early Estimation of Software Quality Using In-Process Testing Metrics: A Controlled Case Study, Proceedings of the Third Workshop on Software Quality, St. Louis, Missouri, USA, 2005.
[10] Hector M. Olague, Sampson Gholston, Stephen Quattlebaum, Empirical Validation of Three Software Metrics Suites to Predict Fault-Proneness of Object-Oriented Classes Developed Using Highly Iterative or Agile Software Development Processes, IEEE Transactions on Software Engineering, 2007.