Data Mining - Cluster Analysis Basic Concepts and Algorithms
Intra-cluster distances are minimized; inter-cluster distances are maximized.
Example discovered cluster: Louisiana-Land-UP, Phillips-Petro-UP, Unocal-UP, Schlumberger-UP (industry group: Oil-UP)
Summarization
– Reduce the size of large data sets
[Figure: clustering precipitation in Australia]
– Hierarchical clustering
A set of nested clusters organized as a hierarchical tree
[Figure: a hierarchical clustering of points p1–p4 shown as nested clusters and as a dendrogram]
Well-separated clusters
Prototype-based clusters
Contiguity-based clusters
Density-based clusters
Well-Separated Clusters:
– A cluster is a set of points such that any point in a cluster is
closer (or more similar) to every other point in the cluster than
to any point not in the cluster.
3 well-separated clusters
Prototype-based
– A cluster is a set of objects such that an object in a cluster is
closer (more similar) to the prototype or “center” of a cluster,
than to the center of any other cluster
– The center of a cluster is often a centroid, the average of all
the points in the cluster, or a medoid, the most “representative”
point of a cluster
4 center-based clusters
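As an aside not in the original slides, here is a minimal NumPy sketch of the two kinds of "center" described above; the cluster points are made up.

```python
import numpy as np

cluster = np.array([[1.0, 1.0], [1.5, 2.0], [3.0, 4.0], [2.0, 1.5]])  # hypothetical cluster

# Centroid: the coordinate-wise mean of all points in the cluster.
centroid = cluster.mean(axis=0)

# Medoid: the actual data point whose total distance to the other
# points in the cluster is smallest.
pairwise = np.linalg.norm(cluster[:, None, :] - cluster[None, :, :], axis=-1)
medoid = cluster[pairwise.sum(axis=1).argmin()]

print("centroid:", centroid)
print("medoid:  ", medoid)
```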
Contiguity-based
– A cluster is a set of points such that a point in a cluster is closer (or more similar) to one or more other points in the cluster than to any point not in the cluster.
8 contiguous clusters
Density-based
– A cluster is a dense region of points, separated from other regions of high density by low-density regions.
– Used when the clusters are irregular or intertwined, and when
noise and outliers are present.
6 density-based clusters
Clustering algorithms
– K-means and its variants
– Hierarchical clustering
– Density-based clustering
[Figures: original points and successive K-means iterations (Iteration 1, Iteration 2, ...) for two different choices of initial centroids]
Depending on the choice of initial centroids, B and C may get merged or remain separate.
[Figure: K-means iterations 1–4 on the 10 clusters example]
Starting with two initial centroids in one cluster of each pair of clusters
10 Clusters Example
[Figures: Iterations 1–4 of K-means on the 10 clusters example]
Starting with two initial centroids in one cluster of each pair of clusters
10 Clusters Example
[Figure: K-means iterations 1–4 on the 10 clusters example with a different initialization]
Starting with some pairs of clusters having three initial centroids, while others have only one.
[Figures: Iterations 1–4 for this initialization]
Solutions to Initial Centroids Problem
Multiple runs
– Helps, but probability is not on your side
Use some strategy to select the k initial centroids and then select among these initial centroids
– Select most widely separated
K-means++ is a robust way of doing this selection
– Use hierarchical clustering to determine initial
centroids
Bisecting K-means
– Not as susceptible to initialization issues
CLUTO: http://glaros.dtc.umn.edu/gkhome/cluto/cluto/overview
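As a hedged sketch of the k-means++ selection mentioned above (the data set X below is made up; in practice scikit-learn's KMeans(init='k-means++') performs this seeding):

```python
import numpy as np

def kmeanspp_init(X, k, seed=0):
    """Pick k seeds: the first at random, each later one with probability
    proportional to its squared distance from the nearest seed chosen so far."""
    rng = np.random.default_rng(seed)
    centroids = [X[rng.integers(len(X))]]
    for _ in range(k - 1):
        d2 = ((X[:, None, :] - np.array(centroids)[None, :, :]) ** 2).sum(-1).min(axis=1)
        centroids.append(X[rng.choice(len(X), p=d2 / d2.sum())])
    return np.array(centroids)

# Hypothetical usage: pick 10 widely separated seeds from made-up data.
X = np.vstack([np.random.randn(50, 2) + 6 * np.array([i % 5, i // 5]) for i in range(10)])
seeds = kmeanspp_init(X, k=10)
```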
One solution is to find a large number of clusters such that each of them represents a part of a natural cluster. But these small clusters need to be put together in a post-processing step.
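One way this could look in code, as a sketch under the assumption that scikit-learn and SciPy are available (the data, the number of small clusters, and the final cluster count are all made up):

```python
import numpy as np
from sklearn.cluster import KMeans
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical data with a few natural clusters.
X = np.random.rand(300, 2)

# Step 1: find many small clusters, each covering part of a natural cluster.
km = KMeans(n_clusters=30, n_init=10, random_state=0).fit(X)

# Step 2 (post-processing): merge the small clusters by clustering their centroids.
Z = linkage(km.cluster_centers_, method="single")
merged = fcluster(Z, t=3, criterion="maxclust")   # final label for each small cluster
labels = merged[km.labels_]                       # map back to the original points
```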
[Figure: a clustering of six points (1–6) and the corresponding dendrogram, leaf order 1 3 2 5 4 6]
– Agglomerative:
Start with the points as individual clusters
At each step, merge the two closest clusters until one cluster (or k clusters) remains
– Divisive:
Start with one, all-inclusive cluster
At each step, split a cluster until each cluster contains an individual point (or there are k clusters)
[Figure: each point p1, p2, ..., p12 as its own cluster, with the corresponding proximity matrix]
[Figure: intermediate situation after some merging steps: clusters C1–C5 and their proximity matrix]
We want to merge the two closest clusters (C2 and C5) and update the proximity matrix.
[Figure: after merging C2 and C5, the row and column of the new cluster C2 ∪ C5 in the proximity matrix are marked "?": they must be recomputed, and how depends on the definition of inter-cluster similarity]
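A minimal sketch (my own, not the textbook's code) of this merge-and-update loop, assuming MIN (single link) proximity and a small hypothetical distance matrix D; the other definitions listed next change only how the merged row and column are recomputed.

```python
import numpy as np

def agglomerative_min(D):
    """Naive single-link (MIN) agglomerative clustering on a distance matrix D.
    Returns the sequence of merges as pairs of point-index lists."""
    D = D.astype(float).copy()
    np.fill_diagonal(D, np.inf)
    clusters = [[i] for i in range(len(D))]     # current membership of each row
    active = list(range(len(D)))                # rows that still represent a cluster
    merges = []
    while len(active) > 1:
        sub = D[np.ix_(active, active)]         # proximity matrix of active clusters
        a, b = np.unravel_index(np.argmin(sub), sub.shape)
        i, j = active[a], active[b]             # the two closest clusters
        merges.append((clusters[i], clusters[j]))
        # MIN update: distance to the merged cluster is the smaller of the two.
        D[i, :] = np.minimum(D[i, :], D[j, :])
        D[:, i] = D[i, :]
        D[i, i] = np.inf
        clusters[i] = clusters[i] + clusters[j]
        active.remove(j)
    return merges

# Hypothetical 4-point distance matrix.
D = np.array([[0, 2, 6, 10], [2, 0, 5, 9], [6, 5, 0, 4], [10, 9, 4, 0]])
print(agglomerative_min(D))   # [([0], [1]), ([2], [3]), ([0, 1], [2, 3])]
```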
How to define inter-cluster similarity:
MIN
MAX
Group Average
Distance Between Centroids
Other methods driven by an objective function
– Ward’s Method uses squared error
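These proximity definitions map directly onto the method argument of SciPy's linkage; below is a small sketch on made-up data (not the slides' six-point example).

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

X = np.random.rand(20, 2)                      # hypothetical 2-D points

for method in ["single", "complete", "average", "centroid", "ward"]:
    Z = linkage(X, method=method)              # MIN, MAX, group average, centroids, Ward
    labels = fcluster(Z, t=3, criterion="maxclust")
    print(method, np.bincount(labels)[1:])     # cluster sizes for a 3-cluster cut
```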
MIN (single link)
[Figure: single-link clustering of the six points and its dendrogram, leaf order 3 6 2 5 4 1]
Limitation of MIN:
• Sensitive to noise
[Figure: original points split into two clusters vs. three clusters]
MAX (complete link)
[Figure: complete-link clustering of the six points and its dendrogram, leaf order 3 6 4 1 2 5]
Group Average
– Proximity of two clusters is the average of pairwise proximity between points in the two clusters:
$$\mathrm{proximity}(Cluster_i, Cluster_j) = \frac{\sum_{p_i \in Cluster_i,\; p_j \in Cluster_j} \mathrm{proximity}(p_i, p_j)}{|Cluster_i|\,|Cluster_j|}$$
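A quick NumPy check of this formula on two made-up clusters (not from the slides):

```python
import numpy as np
from scipy.spatial.distance import cdist

Ci = np.array([[0.0, 0.0], [0.0, 1.0]])                 # hypothetical cluster i
Cj = np.array([[3.0, 0.0], [4.0, 1.0], [5.0, 0.0]])     # hypothetical cluster j

# Sum of all pairwise distances divided by |Ci| * |Cj|.
group_avg = cdist(Ci, Cj).sum() / (len(Ci) * len(Cj))
print(group_avg)
```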
[Figure: group-average clustering of the six points and its dendrogram, leaf order 3 6 4 1 2 5]
Strengths
– Less susceptible to noise
Limitations
– Biased towards globular clusters
[Figure: side-by-side comparison of MIN, MAX, Group Average, and Ward’s Method on the same six points]
[Figure: original points (MinPts = 7) and DBSCAN clusters with (MinPts=4, Eps=9.92)]
DBSCAN does not work well with:
• Varying densities
• High-dimensional data
[Figure: DBSCAN clusters with (MinPts=4, Eps=9.75)]
DBSCAN: Determining EPS and MinPts
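The usual heuristic behind this step is to plot every point's distance to its k-th nearest neighbor in sorted order and pick Eps near the "knee" of the curve. The following is a minimal sketch assuming scikit-learn and matplotlib, with made-up data.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.neighbors import NearestNeighbors

X = np.random.rand(500, 2)                  # hypothetical data
k = 4                                       # MinPts candidate

# Distance of every point to its k-th nearest neighbor (excluding itself).
nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
dist, _ = nn.kneighbors(X)
kdist = np.sort(dist[:, -1])

plt.plot(kdist)                             # look for the "knee" to choose Eps
plt.xlabel("points sorted by distance")
plt.ylabel(f"{k}-th nearest neighbor distance")
plt.show()
```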
Measures of Cluster Validity
Numerical measures that are applied to judge various aspects of cluster validity are classified into the following two types.
– Supervised: Used to measure the extent to which cluster labels
match externally supplied class labels.
Entropy
Often called external indices because they use information external to the data
– Unsupervised: Used to measure the goodness of a clustering
structure without respect to external information.
Sum of Squared Error (SSE)
Often called internal indices because they only use information in the data
– Cohesion is measured by the within-cluster sum of squares (SSE):
$$SSE = \sum_{i} \sum_{x \in C_i} (x - m_i)^2$$
– Separation is measured by the between-cluster sum of squares:
$$SSB = \sum_{i} |C_i|\,(m - m_i)^2$$
where $|C_i|$ is the size of cluster i, $m_i$ is its centroid, and $m$ is the overall mean.
Example: SSE
– SSB + SSE = constant (the total sum of squares)
[Figure: points on a line with cluster centroids m1 and m2 and overall mean m, illustrating cohesion and separation]
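A quick NumPy check of the SSB + SSE = constant identity on a small made-up one-dimensional example (the points and labels below are assumptions, not the slides' figure):

```python
import numpy as np

x = np.array([1.0, 2.0, 4.0, 5.0])            # hypothetical points on a line
labels = np.array([0, 0, 1, 1])               # a two-cluster assignment
m = x.mean()                                  # overall mean

sse = sum(((x[labels == i] - x[labels == i].mean()) ** 2).sum() for i in np.unique(labels))
ssb = sum((labels == i).sum() * (m - x[labels == i].mean()) ** 2 for i in np.unique(labels))
tss = ((x - m) ** 2).sum()

print(sse, ssb, sse + ssb, tss)               # SSE + SSB equals the total sum of squares
```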
[Figure: data points and the corresponding similarity matrices; Corr = 0.9235]
[Figures: similarity matrices with points sorted by cluster label for different data sets]
[Figure: similarity matrix, sorted by cluster label, for clusters found by DBSCAN]
[Figure: a clustered ten-cluster data set and SSE as a function of the number of clusters K, for K from 2 to 30]
Example
– Compare SSE of three cohesive clusters against three clusters in
random data
[Figures: the clustered data set and a histogram of SSE values for clusterings of random data]
SSE = 0.005. The histogram shows the SSE of three clusters in 500 sets of random data points of size 100, distributed over the range 0.2 – 0.8 for x and y values.
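A sketch of how such a comparison could be generated (assuming scikit-learn; the set-up of the slides' histogram is only approximated, and KMeans' inertia_ is used as the SSE):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
random_sse = []
for _ in range(500):
    Xr = rng.uniform(0.2, 0.8, size=(100, 2))          # random data, same size and range
    random_sse.append(KMeans(n_clusters=3, n_init=10, random_state=0).fit(Xr).inertia_)

# An SSE far below this distribution (e.g. 0.005) indicates non-random cluster structure.
print(np.percentile(random_sse, [1, 50, 99]))
```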