Chapter 8: Basic Cluster Analysis
The goal of clustering: intra-cluster distances are minimized, while inter-cluster distances are maximized.
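As a rough illustration of this goal (not from the slides), the sketch below compares the average intra-cluster distance with the inter-cluster centroid distance on a tiny, made-up 2-D data set; the points, labels, and variable names are all invented for this example.

# Sketch: compare intra-cluster vs. inter-cluster distances on a toy 2-D data set.
# The points and labels are invented for illustration only.
import numpy as np
from scipy.spatial.distance import pdist, cdist

X = np.array([[0.0, 0.0], [0.2, 0.1], [0.1, 0.3],     # cluster 0
              [5.0, 5.0], [5.2, 4.9], [4.8, 5.1]])    # cluster 1
labels = np.array([0, 0, 0, 1, 1, 1])

# Average pairwise distance inside each cluster (intra-cluster distance).
intra = np.mean([pdist(X[labels == k]).mean() for k in np.unique(labels)])

# Distance between the two cluster centroids (inter-cluster distance).
centroids = np.array([X[labels == k].mean(axis=0) for k in np.unique(labels)])
inter = cdist(centroids, centroids)[0, 1]

print(f"average intra-cluster distance: {intra:.3f}")
print(f"inter-cluster (centroid) distance: {inter:.3f}")
# A good clustering keeps the intra-cluster value small relative to the inter-cluster value.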
Example: a cluster of stocks with similar price movements.
Cluster 4 (Oil-UP): Louisiana-Land-UP, Phillips-Petro-UP, Unocal-UP, Schlumberger-UP
● Summarization
– Reduce the size of large data sets (e.g., clustering precipitation in Australia)
● Supervised classification
– Have class label information
● Simple segmentation
– Dividing students into different registration groups alphabetically, by last name
● Results of a query
– Groupings are a result of an external specification
● Graph partitioning
– Some mutual relevance and synergy, but the areas are not identical
● Partitional Clustering
– A division of data objects into non-overlapping subsets (clusters) such that each data object is in exactly one subset
● Hierarchical clustering
– A set of nested clusters organized as a hierarchical tree
(Figure: Traditional Hierarchical Clustering of points p1–p4 and the corresponding Traditional Dendrogram.)
(Figure: Non-traditional Hierarchical Clustering of points p1–p4 and the corresponding Non-traditional Dendrogram.)
● Well-separated clusters
● Center-based clusters
● Contiguous clusters
● Density-based clusters
● Property or conceptual clusters
● Well-Separated Clusters:
– A cluster is a set of points such that any point in a cluster is
closer (or more similar) to every other point in the cluster than
to any point not in the cluster.
(Figure: 3 well-separated clusters.)
● Center-based
– A cluster is a set of objects such that an object in a cluster is closer (more similar) to the “center” of its cluster than to the center of any other cluster
– The center of a cluster is often a centroid, the average of all the points in the cluster, or a medoid, the most “representative” point of a cluster
(Figure: 4 center-based clusters.)
● Contiguous clusters (nearest neighbor or transitive)
– A cluster is a set of points such that a point in a cluster is closer (or more similar) to one or more other points in the cluster than to any point not in the cluster
(Figure: 8 contiguous clusters.)
● Density-based
– A cluster is a dense region of points that is separated by low-density regions from other regions of high density
– Used when the clusters are irregular or intertwined, and when noise and outliers are present
(Figure: 6 density-based clusters.)
(Figure: 2 overlapping circles, an example of shared-property or conceptual clusters.)
● K-means and its variants
● Hierarchical clustering
● Density-based clustering
(Figure: original points and the K-means cluster assignments at Iterations 1–6 for one choice of initial centroids.)
(Figure: K-means cluster assignments at Iterations 1–5 for a different choice of initial centroids.)
Starting with two initial centroids in one cluster of each pair of clusters
10 Clusters Example
(Figure: Iterations 1 to 4 of K-means on the 10-cluster example; x from 0 to 20.)
Starting with two initial centroids in one cluster of each pair of clusters
10 Clusters Example
(Figure: Iterations 1 to 4 of K-means on the 10-cluster example; x from 0 to 20.)
Starting with some pairs of clusters having three initial centroids, while others have only one.
10 Clusters Example
(Figure: Iterations 1 to 4 of K-means on the 10-cluster example; x from 0 to 20.)
Starting with some pairs of clusters having three initial centroids, while others have only one.
Solutions to Initial Centroids Problem
● Multiple runs
– Helps, but probability is not on your side (see the sketch after this list)
● Sample the data and use hierarchical clustering to determine initial centroids
● Select more than k initial centroids and then select among these initial centroids
– Select the most widely separated
● Postprocessing
● Bisecting K-means
– Not as susceptible to initialization issues
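A minimal sketch of the “multiple runs” idea, using scikit-learn’s KMeans as one possible implementation (the library choice and the synthetic data are mine, not specified by the slides): run K-means from several random initializations and keep the solution with the lowest SSE. scikit-learn’s n_init parameter does exactly this restart loop internally, and init="k-means++" is a related smarter-seeding option.

# Sketch: mitigate the initial-centroid problem by running K-means several times
# and keeping the lowest-SSE solution. Data here is synthetic, for illustration.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=5, random_state=0)

# Option 1: let scikit-learn do the repeated runs (n_init restarts).
km = KMeans(n_clusters=5, n_init=10, init="random", random_state=0).fit(X)
print("best SSE over 10 random restarts:", km.inertia_)

# Option 2: the same idea spelled out by hand.
best_sse, best_model = np.inf, None
for seed in range(10):
    model = KMeans(n_clusters=5, n_init=1, init="random", random_state=seed).fit(X)
    if model.inertia_ < best_sse:          # inertia_ is the within-cluster SSE
        best_sse, best_model = model.inertia_, model
print("best SSE from the manual loop:", best_sse)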
● Several strategies for handling empty clusters (a sketch of the first follows this list)
– Choose the point that contributes most to SSE
– Choose a point from the cluster with the highest SSE
– If there are several empty clusters, the above can be repeated several times
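A hedged sketch of the first strategy; the function name reassign_empty_cluster and the argument layout are my own, the slides do not prescribe an interface.

# Sketch: if a cluster ends up empty, replace its centroid with the point that
# currently contributes most to the SSE (i.e., is farthest from its own centroid).
import numpy as np

def reassign_empty_cluster(X, centroids, labels, empty_k):
    """Return a copy of centroids with the empty cluster's centroid replaced."""
    # squared distance of each point to its currently assigned centroid
    contrib = np.sum((X - centroids[labels]) ** 2, axis=1)
    worst = np.argmax(contrib)          # the point contributing most to SSE
    centroids = centroids.copy()
    centroids[empty_k] = X[worst]
    return centroids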
● Pre-processing
– Normalize the data
– Eliminate outliers
● Post-processing
– Eliminate small clusters that may represent outliers
– Split ‘loose’ clusters, i.e., clusters with relatively high SSE
– Merge clusters that are ‘close’ and that have relatively low SSE
– These steps can also be used during the clustering process (e.g., ISODATA does this); a diagnostic sketch follows
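An illustrative sketch of the post-processing checks, in the spirit of the split/merge idea above: per-cluster SSE flags ‘loose’ clusters as split candidates, and centroid distances flag ‘close’ pairs as merge candidates. The function name, thresholds, and the assumption that labels run 0..K−1 and index the centroid rows are all mine.

# Sketch: post-processing diagnostics for a K-means result.
# Flags clusters that might be split (high SSE) or merged (nearby centroids).
import numpy as np
from scipy.spatial.distance import pdist, squareform

def postprocess_report(X, labels, centroids, sse_factor=2.0, merge_dist=1.0):
    ks = np.unique(labels)                              # assumed to be 0..K-1
    sse = np.array([np.sum((X[labels == k] - centroids[k]) ** 2) for k in ks])
    loose = ks[sse > sse_factor * sse.mean()]           # candidates to split
    d = squareform(pdist(centroids))                    # centroid-to-centroid distances
    close = [(int(i), int(j)) for i in ks for j in ks
             if i < j and d[i, j] < merge_dist]         # candidates to merge
    return loose, close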
(Figure: a hierarchical clustering of six points shown as nested clusters and as a dendrogram.)
– Agglomerative:
Start with the points as individual clusters
At each step, merge the closest pair of clusters until only one cluster (or k clusters) remains
– Divisive:
Start with one, all-inclusive cluster
At each step, split a cluster until each cluster contains a point (or there are k clusters)
(Figure: starting situation, clusters of individual points p1 … p12 and the corresponding proximity matrix.)
(Figure: intermediate situation, after some merging steps we have clusters C1–C5 and a proximity matrix between them.)
● We want to merge the two closest clusters (C2 and C5) and update the proximity matrix.
(Figure: clusters C1–C5 and their proximity matrix before the merge.)
● After merging, how do we update the proximity matrix? The entries for the new cluster C2 U C5 (marked “?”) must be recomputed against C1, C3, and C4.
(Figure: the proximity matrix after merging C2 and C5.)
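A sketch of one agglomerative step: find the closest pair of clusters in a distance matrix, merge them, and update the merged row and column. The update shown here uses the single-link (MIN) rule, which takes the minimum of the two old distances; the function and variable names are mine.

# Sketch: one merge step of agglomerative clustering on a distance matrix.
import numpy as np

def merge_closest(D, names):
    """One agglomerative merge step; returns the reduced matrix and new names."""
    D = D.astype(float).copy()
    np.fill_diagonal(D, np.inf)                 # ignore self-distances
    i, j = np.unravel_index(np.argmin(D), D.shape)
    i, j = min(i, j), max(i, j)
    merged_name = f"{names[i]} U {names[j]}"
    merged_row = np.minimum(D[i], D[j])         # MIN (single-link) update rule
    D[i, :], D[:, i] = merged_row, merged_row
    D = np.delete(np.delete(D, j, axis=0), j, axis=1)
    np.fill_diagonal(D, 0.0)
    new_names = [merged_name if k == i else n
                 for k, n in enumerate(names) if k != j]
    return D, new_names

For the situation in the slides, merge_closest(D, ["C1", "C2", "C3", "C4", "C5"]) would merge the closest pair (C2 and C5 in that example) and return a 4×4 matrix with a row and column for C2 U C5.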
● How to define inter-cluster similarity, given the proximity matrix of the points (a SciPy sketch follows this list):
● MIN (single link)
● MAX (complete link)
● Group Average
● Distance Between Centroids
● Other methods driven by an objective function
– Ward’s Method uses squared error
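A sketch of how these inter-cluster definitions map onto SciPy’s agglomerative clustering (my choice of library): "single" is MIN, "complete" is MAX, "average" is group average, "centroid" uses the distance between centroids, and "ward" is Ward’s method. The data is synthetic.

# Sketch: the same data set clustered with different inter-cluster definitions.
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram, fcluster
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=60, centers=3, random_state=1)

for method in ["single", "complete", "average", "centroid", "ward"]:
    Z = linkage(X, method=method)                   # (n-1) x 4 merge table
    labels = fcluster(Z, t=3, criterion="maxclust") # cut the tree into 3 clusters
    print(method, "-> cluster sizes:", np.bincount(labels)[1:])

# dendrogram(Z) can be used (with matplotlib) to draw the last tree computed.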
● Similarity matrix for five items I1–I5:

      I1    I2    I3    I4    I5
I1   1.00  0.90  0.10  0.65  0.20
I2   0.90  1.00  0.70  0.60  0.50
I3   0.10  0.70  1.00  0.40  0.30
I4   0.65  0.60  0.40  1.00  0.80
I5   0.20  0.50  0.30  0.80  1.00

(Figure: MIN (single link) nested clusters and dendrogram for a six-point example; dendrogram leaf order 3, 6, 2, 5, 4, 1.)
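A sketch of running single link (MIN) on the I1–I5 similarity matrix above. SciPy’s linkage expects distances, so the similarities are converted with dist = 1 − sim, which is my convention for this illustration rather than something the slides specify.

# Sketch: single-link clustering of the I1..I5 similarity matrix from the slides.
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import squareform

sim = np.array([[1.00, 0.90, 0.10, 0.65, 0.20],
                [0.90, 1.00, 0.70, 0.60, 0.50],
                [0.10, 0.70, 1.00, 0.40, 0.30],
                [0.65, 0.60, 0.40, 1.00, 0.80],
                [0.20, 0.50, 0.30, 0.80, 1.00]])

dist = 1.0 - sim                          # turn similarities into distances
condensed = squareform(dist, checks=False)
Z = linkage(condensed, method="single")   # MIN / single link
print(Z)                                  # each row: the two clusters merged and their distance
# dendrogram(Z, labels=["I1", "I2", "I3", "I4", "I5"]) draws the tree with matplotlib.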
● The same similarity matrix as above, now clustered with MAX (complete link).
(Figure: MAX (complete link) nested clusters and dendrogram for the six-point example; dendrogram leaf order 3, 6, 4, 1, 2, 5.)
(Figure: Group Average nested clusters and dendrogram for the six-point example; dendrogram leaf order 3, 6, 4, 1, 2, 5.)
● Strengths
– Less susceptible to noise and outliers
● Limitations
– Biased towards globular clusters
(Figure: comparison of MIN, MAX, Group Average, and Ward’s Method clusterings of the same six points.)
● When DBSCAN works well
– Resistant to noise
– Can handle clusters of different shapes and sizes
● When DBSCAN does not work well
– Varying densities
– High-dimensional data
(Figures: original points and DBSCAN results with MinPts=4 and Eps=9.75 / Eps=9.92.)
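A sketch using scikit-learn’s DBSCAN (my choice of library); eps and min_samples correspond to the Eps and MinPts parameters above, but the values used here are tuned to the synthetic data below, not to the slide figures.

# Sketch: DBSCAN on synthetic non-globular data; the label -1 marks noise points.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.datasets import make_moons

X, _ = make_moons(n_samples=300, noise=0.05, random_state=0)
db = DBSCAN(eps=0.2, min_samples=4).fit(X)    # eps ~ Eps, min_samples ~ MinPts

n_clusters = len(set(db.labels_)) - (1 if -1 in db.labels_ else 0)
n_noise = int(np.sum(db.labels_ == -1))
print(f"clusters found: {n_clusters}, noise points: {n_noise}")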
DBSCAN: Determining EPS and MinPts
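The usual heuristic, shown here as a sketch with scikit-learn and Euclidean distance: for each point compute the distance to its k-th nearest neighbor (k = MinPts), sort these distances, and pick Eps near the sharp “knee” of the curve, since points inside clusters have similar k-dist values while noise points have much larger ones.

# Sketch: sorted k-dist plot used to choose Eps for DBSCAN (k = MinPts).
import numpy as np
import matplotlib.pyplot as plt
from sklearn.neighbors import NearestNeighbors
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=3, random_state=0)
k = 4                                             # MinPts

# n_neighbors = k + 1 because the nearest "neighbor" of each point is itself.
nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
dist, _ = nn.kneighbors(X)
k_dist = np.sort(dist[:, -1])                     # distance to the k-th nearest neighbor

plt.plot(k_dist)
plt.xlabel("points sorted by distance")
plt.ylabel(f"distance to {k}th nearest neighbor")
plt.show()                                        # pick Eps near the knee of this curve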
Different Aspects of Cluster Validation
● Two matrices
– Proximity Matrix
– “Incidence” Matrix
One row and one column for each data point
An entry is 1 if the associated pair of points belongs to the same cluster
An entry is 0 if the associated pair of points belongs to different clusters
● Compute the correlation between the two matrices (see the sketch below)
– Since the matrices are symmetric, only the correlation between n(n−1)/2 entries needs to be calculated
● High correlation indicates that points that belong to the same cluster are close to each other
● Not a good measure for some density- or contiguity-based clusters
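A minimal sketch of this correlation measure on synthetic data: build the incidence matrix from cluster labels, take the unique upper-triangular entries of both matrices, and correlate them. Because a distance matrix is used as the proximity matrix here, a good clustering shows up as a strongly negative correlation; with similarities it would be positive.

# Sketch: correlation between the "incidence" matrix (1 if two points share a
# cluster) and the proximity (distance) matrix.
import numpy as np
from scipy.spatial.distance import squareform, pdist
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=100, centers=3, random_state=0)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

incidence = (labels[:, None] == labels[None, :]).astype(float)
proximity = squareform(pdist(X))                    # pairwise distances

iu = np.triu_indices_from(proximity, k=1)           # the n(n-1)/2 unique pairs
corr = np.corrcoef(incidence[iu], proximity[iu])[0, 1]
print("correlation:", corr)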
(Figure: well-separated clusters and their similarity matrix with points ordered by cluster label; axes “Points” × “Points”, entries shaded by similarity.)
(Figure: points and similarity matrix ordered by cluster label, DBSCAN.)
(Figure: points and similarity matrix ordered by cluster label, K-means.)
(Figure: points and similarity matrix ordered by cluster label, Complete Link.)
(Figure: similarity matrix ordered by cluster label for DBSCAN clusters 1–7 on a larger data set of roughly 3000 points.)
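A sketch of the visual check behind these figures: sort the points by cluster label and display the pairwise similarity matrix, so that well-separated clusters appear as bright blocks on the diagonal. The similarity transform used here (1 minus rescaled distance) and the synthetic data are my own choices.

# Sketch: similarity matrix reordered by cluster label for visual validation.
import numpy as np
import matplotlib.pyplot as plt
from scipy.spatial.distance import squareform, pdist
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=100, centers=3, random_state=0)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

order = np.argsort(labels)                      # group points by cluster
D = squareform(pdist(X))
sim = 1.0 - D / D.max()                         # crude similarity in [0, 1]

plt.imshow(sim[np.ix_(order, order)], cmap="viridis")
plt.xlabel("Points")
plt.ylabel("Points")
plt.colorbar(label="Similarity")
plt.show()                                      # crisp diagonal blocks suggest good clusters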
(Figure: SSE versus the number of clusters K, for K from 2 to 30.)
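A sketch of how such an SSE-versus-K curve is produced, using synthetic data and scikit-learn’s inertia_ attribute as the SSE; knees in the curve suggest natural numbers of clusters.

# Sketch: within-cluster SSE as a function of the number of clusters K.
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=500, centers=10, random_state=0)

ks = range(1, 16)
sse = [KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).inertia_ for k in ks]

plt.plot(list(ks), sse, marker="o")
plt.xlabel("K")
plt.ylabel("SSE")
plt.show()    # look for a knee/elbow in the curve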
Internal Measures: SSE
● Example
– Compare an observed SSE of 0.005 against the SSE of three clusters in random data
– The histogram below shows the SSE of three clusters in 500 sets of random data points of size 100, distributed over the range 0.2–0.8 for x and y values
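A sketch of this statistical framework: generate many data sets of 100 uniformly random points over [0.2, 0.8] × [0.2, 0.8], cluster each into three clusters, and look at the distribution of SSE; an observed SSE far below that distribution is unlikely to have arisen from random data. The absolute SSE scale depends on implementation conventions, so the printed values need not match the slide histogram exactly.

# Sketch: reference distribution of SSE for 3 clusters in random data.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_sets, n_points = 500, 100               # as in the slides' example

sse = []
for _ in range(n_sets):
    X = rng.uniform(0.2, 0.8, size=(n_points, 2))   # random points in [0.2, 0.8]^2
    sse.append(KMeans(n_clusters=3, n_init=10, random_state=0).fit(X).inertia_)

plt.hist(sse, bins=20)
plt.xlabel("SSE")
plt.ylabel("Count")
plt.show()
print("smallest SSE over", n_sets, "random data sets:", min(sse))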
(Figure: random data points in [0.2, 0.8]² and a histogram (Count versus SSE) of the SSE of three clusters over 500 random data sets; the SSE values range from roughly 0.016 to 0.034.)
● Separation is measured by the between-cluster sum of squares
BSS = \sum_i |C_i| \, (m - m_i)^2
– where |C_i| is the size of cluster i, m_i is the centroid of cluster i, and m is the overall mean
Internal Measures: Cohesion and Separation
● Example: SSE
– BSS + WSS = constant
(Figure: points on a line with cluster centroids m1 and m2 and overall centroid m; cohesion is measured within each cluster and separation between the centroids.)
A worked version of this example follows.
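A sketch verifying BSS + WSS = constant (the total sum of squares) on a tiny 1-D example. The points 1, 2, 4, 5 and the split {1, 2} / {4, 5} are my reading of the figure; the identity holds for any choice of points and clusters.

# Sketch: verify that BSS + WSS equals the total SS on a tiny 1-D example.
import numpy as np

x = np.array([1.0, 2.0, 4.0, 5.0])
labels = np.array([0, 0, 1, 1])
m = x.mean()                                              # overall mean (= 3)

tss = np.sum((x - m) ** 2)                                # total sum of squares
wss = sum(np.sum((x[labels == k] - x[labels == k].mean()) ** 2)
          for k in np.unique(labels))                     # cohesion
bss = sum(np.sum(labels == k) * (x[labels == k].mean() - m) ** 2
          for k in np.unique(labels))                     # separation

print(f"WSS = {wss}, BSS = {bss}, WSS + BSS = {wss + bss}, TSS = {tss}")
# With a single cluster the same points give WSS = 10 and BSS = 0;
# with this two-cluster split WSS = 1 and BSS = 9. The total stays 10.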
Internal Measures: Silhouette Coefficient
● For a point i, let a be the average distance of i to the points in its own cluster, and b the minimum (over the other clusters) of the average distance of i to the points in another cluster; the silhouette coefficient is s = 1 − a/b (when a < b).
– Typically between 0 and 1.
– The closer to 1 the better.
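A sketch of the silhouette coefficient in practice, using scikit-learn on synthetic data: silhouette_score averages the per-point coefficients, and silhouette_samples returns the individual values.

# Sketch: average silhouette coefficient for a K-means clustering.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score, silhouette_samples

X, _ = make_blobs(n_samples=300, centers=4, random_state=0)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

print("average silhouette:", silhouette_score(X, labels))      # closer to 1 is better
print("first five per-point values:", silhouette_samples(X, labels)[:5])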