Cluster Analysis in R
Cluster analysis is a method frequently used in client segmentation: the process of dividing
clients into well-separated, homogeneous groups. The idea is to create customized marketing
strategies for selected segments in order to better satisfy clients' needs. Cluster analysis can
also be used for credit scoring, the process of determining how likely applicants are to default
on their repayments [10]. The aim is to assess the risk of default associated with a credit
product/decision.
Usually, one credit scoring model is built for the entire client population. Sometimes, however,
the population is heterogeneous, in which case it is reasonable to segment the population
and then develop a separate scoring model for each segment. In [1], various factors are
emphasized that can trigger a decision to build more than one scoring model: marketing,
customer, data, process, and model-fit factors.
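As a hedged sketch of this segment-then-score idea (the synthetic data, variable names, and choice of three segments below are illustrative, not taken from [1]): clients are first clustered on their features, then a separate logistic-regression scorecard is fitted per segment.

```r
# Sketch: segment clients with k-means, then fit one scoring model
# per segment (synthetic data; all names are illustrative)
set.seed(1)
clients <- data.frame(income = rnorm(300, 50, 10),
                      utilization = runif(300),
                      default = rbinom(300, 1, 0.2))
features <- scale(clients[, c("income", "utilization")])
seg <- kmeans(features, centers = 3)$cluster

# one logistic-regression scorecard per segment
models <- lapply(split(clients, seg), function(d)
  glm(default ~ income + utilization, data = d, family = binomial))
length(models)  # one fitted model per segment
```

In practice each segment's model would be validated separately, and the segmentation itself would be driven by the factors listed above rather than by arbitrary features.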
Cluster Analysis in R
https://www.statmethods.net/advstats/cluster.html
R has an amazing variety of functions for cluster analysis. In this section, I will
describe three of the many approaches: hierarchical agglomerative, partitioning, and
model based. While there are no best solutions for the problem of determining the
number of clusters to extract, several approaches are given below.
Data Preparation
Prior to clustering data, you may want to remove or estimate missing data and
rescale variables for comparability.
# Prepare Data
mydata <- na.omit(mydata) # listwise deletion of missing values
mydata <- scale(mydata)   # standardize variables
Partitioning
K-means clustering is the most popular partitioning method. It requires the analyst to
specify the number of clusters to extract. A plot of the within groups sum of squares
by number of clusters extracted can help determine the appropriate number of
clusters. The analyst looks for a bend in the plot similar to a scree test in factor
analysis. See Everitt & Hothorn (pg. 251).
# Determine number of clusters
wss <- (nrow(mydata)-1)*sum(apply(mydata, 2, var))
for (i in 2:15) wss[i] <- sum(kmeans(mydata, centers=i)$withinss)
plot(1:15, wss, type="b", xlab="Number of Clusters",
   ylab="Within groups sum of squares")

fit <- kmeans(mydata, 5) # k-means with 5 clusters
aggregate(mydata, by=list(fit$cluster), FUN=mean) # get cluster means
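For a concrete run, the same recipe can be tried on R's built-in USArrests data (my choice of dataset and of k = 4 is illustrative, not prescribed by the text):

```r
# Elbow plot and k-means on the built-in USArrests data
mydata <- scale(na.omit(USArrests))
wss <- (nrow(mydata) - 1) * sum(apply(mydata, 2, var))
for (i in 2:15) wss[i] <- sum(kmeans(mydata, centers = i)$withinss)
plot(1:15, wss, type = "b", xlab = "Number of Clusters",
     ylab = "Within groups sum of squares")

set.seed(42)
fit <- kmeans(mydata, 4)  # pick k at the bend in the plot, here 4
aggregate(mydata, by = list(fit$cluster), FUN = mean)
```

Because k-means starts from random centers, setting a seed makes the assignment reproducible; `nstart > 1` in `kmeans()` is also commonly used to avoid poor local optima.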
Hierarchical Agglomerative
There are a wide range of hierarchical clustering approaches. I have had good luck
with Ward's method described below.
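A plain Ward clustering in base R might look like the sketch below (USArrests and the 5-cluster cut are my illustrative choices; `ward.D2` is the Ward criterion in current versions of `hclust`):

```r
# Ward hierarchical clustering with base R
mydata <- scale(na.omit(USArrests))          # example data, standardized
d <- dist(mydata, method = "euclidean")      # distance matrix
fit <- hclust(d, method = "ward.D2")         # Ward's criterion
plot(fit)                                    # display dendrogram
groups <- cutree(fit, k = 5)                 # cut tree into 5 clusters
rect.hclust(fit, k = 5, border = "red")      # draw borders around clusters
```

The bootstrapped variant below uses the pvclust package to attach p-values to the clusters.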
library(pvclust)
fit <- pvclust(mydata, method.hclust="ward.D2",
   method.dist="euclidean")
plot(fit) # dendrogram with p-values
pvrect(fit, alpha=.95) # highlight clusters strongly supported by the data
Model Based
Model based approaches assume a variety of data models and apply maximum
likelihood estimation and Bayes criteria to identify the most likely model and number
of clusters. Specifically, the Mclust( ) function in the mclust package selects the
optimal model according to BIC for EM initialized by hierarchical clustering for
parameterized Gaussian mixture models. (phew!). One chooses the model and
number of clusters with the largest BIC. See help(mclustModelNames) for details on
the model chosen as best.
# Model Based Clustering
library(mclust)
fit <- Mclust(mydata)
summary(fit) # display the best model

Plotting Cluster Solutions

# Cluster plot against first 2 principal components
library(cluster)
fit <- kmeans(mydata, 5) # e.g. a k-means solution to plot
clusplot(mydata, fit$cluster, color=TRUE, shade=TRUE,
   labels=2, lines=0)

# Centroid plot against first 2 discriminant functions
library(fpc)
plotcluster(mydata, fit$cluster)
Validating Cluster Solutions
The cluster.stats( ) function in the fpc package can compare the similarity of two
cluster solutions using a variety of validation criteria.
# comparing 2 cluster solutions
library(fpc)
cluster.stats(d, fit1$cluster, fit2$cluster)
where d is a distance matrix among objects, and fit1$cluster and fit2$cluster are
integer vectors containing classification results from two different clusterings of the
same data.
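A runnable sketch of such a comparison (requires the fpc package to be installed; the two k-means fits and the USArrests data are my illustrative choices):

```r
# Compare two clusterings of the same data with cluster.stats()
library(fpc)
mydata <- scale(na.omit(USArrests))
d <- dist(mydata)                 # distance matrix among objects
set.seed(1)
fit1 <- kmeans(mydata, 3)         # first clustering
fit2 <- kmeans(mydata, 5)         # second clustering of the same data
cs <- cluster.stats(d, fit1$cluster, fit2$cluster)
cs$corrected.rand  # corrected Rand index between the two solutions
cs$dunn            # Dunn index of the first clustering
```

A corrected Rand index near 1 indicates the two solutions agree closely; values near 0 indicate agreement no better than chance.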