Lecture #12
Clustering Techniques: Similarity Measures
Topics to be covered…
 Introduction to clustering
 Similarity and dissimilarity measures
 Clustering techniques
 Partitioning algorithms
 Hierarchical algorithms
 Density-based algorithm
Introduction to Clustering
 Classification consists of assigning a class label to a set of unclassified
cases.
 Supervised Classification
 The set of possible classes is known in advance.
 Unsupervised Classification
 The set of possible classes is not known in advance. After classification, we can try to assign a name to each class.
 Unsupervised classification is called clustering.
Supervised Technique vs. Unsupervised Technique (illustrative figures omitted)
Introduction to Clustering
 Clustering is somewhat related to classification in the sense that in both cases data are grouped.
 However, there is a major difference between these two techniques.
 In order to understand the difference between the two, consider a sample dataset containing marks obtained by a set of students and the corresponding grades, as shown in Table 12.1.
Introduction to Clustering
Table 12.1: Tabulation of Marks
Roll No Mark Grade
1 80 A
2 70 A
3 55 C
4 91 EX
5 65 B
6 35 D
7 76 A
8 40 D
9 50 C
10 85 EX
11 25 F
12 60 B
13 45 D
14 95 EX
15 63 B
16 88 A
Figure 12.1: Group representation of the dataset in Table 12.1 (the students grouped by grade: EX = {4, 10, 14}, A = {1, 2, 7, 16}, B = {5, 12, 15}, C = {3, 9}, D = {6, 8, 13}, F = {11})
Introduction to Clustering
 It is evident that there is a simple mapping between Table 12.1 and Fig 12.1.
 The fact is that the groups in Fig 12.1 are already predefined in Table 12.1. This is similar to classification, where we are given a dataset in which the groups of data are predefined.
 Consider another situation, where ‘Grade’ is not known, but we have to make a grouping.
 Put marks into the same group if no two marks in that group differ by 5 or more.
 This is similar to the “Relative grading” concept, and the grades may range from A to Z.
Introduction to Clustering
 Figure 12.2 shows another grouping by means of another simple mapping, but the difference is that this mapping is not based on predefined classes.
 In other words, this grouping is accomplished by finding similarities between data according to characteristics found in the actual data.
 Such group making is called clustering.
Introduction to Clustering
Example 12.1: The task of clustering
In order to elaborate on the clustering task, consider the following dataset.
Table 12.2: Life Insurance database
With a certain similarity (or likeness) measure defined, we can group the records on one or more attributes (the mapping thus being non-trivial).

Marital Status   Age   Income   Education        Number of children
Single           35    25000    Under Graduate   3
Married          25    15000    Graduate         1
Single           40    20000    Under Graduate   0
Divorced         20    30000    Post-Graduate    0
Divorced         25    20000    Under Graduate   3
Married          60    70000    Graduate         0
Married          30    90000    Post-Graduate    0
Married          45    60000    Graduate         5
Divorced         50    80000    Under Graduate   2
 Clustering has been used in many application domains:
 Image analysis
 Document retrieval
 Machine learning, etc.
 When clustering is applied to a real-world database, many problems may arise.
1. The (best) number of clusters is not known.
 There is no single correct answer to a clustering problem.
 In fact, many answers may be found.
 The exact number of clusters required is not easy to determine.
2. There may not be any a priori knowledge concerning the clusters.
• This raises the issue of what data should be used for clustering.
• Unlike classification, in clustering we do not have supervised learning to aid the process.
• Clustering can be viewed as similar to unsupervised learning.
3. Interpreting the semantic meaning of each cluster may be difficult.
• With classification, the labeling of the classes is known ahead of time. In contrast, with clustering, this may not be the case.
• Thus, when the clustering process finishes, yielding a set of clusters, the exact meaning of each cluster may not be obvious.
Definition of Clustering Problem
Definition 12.1: Clustering
Given a database D = {t₁, t₂, …, tₙ} of n tuples, the clustering problem is to define a mapping f : D → C, where each tᵢ ∈ D is assigned to one cluster cⱼ ∈ C. Here, C = {c₁, c₂, …, c_k} denotes the set of clusters.

• The solution to a clustering problem is devising a mapping formulation.
• The formulation behind such a mapping is to establish that a tuple within one cluster is more like the tuples within that cluster than it is like the tuples outside it.
• Hence, the mapping function f in Definition 12.1 may be explicitly stated as
f : D → {c₁, c₂, …, c_k}
where
i) each tᵢ ∈ D is assigned to one cluster cⱼ ∈ C;
ii) for each cluster cᵢ ∈ C, for all t_ip, t_iq ∈ cᵢ and any t_j ∉ cᵢ,
similarity(t_ip, t_iq) > similarity(t_ip, t_j) and similarity(t_ip, t_iq) > similarity(t_iq, t_j).
• In the field of cluster analysis, this similarity plays an important part.
• We shall now learn how the similarity (alternatively judged as the “dissimilarity”) between any two data objects can be measured.
Similarity and Dissimilarity Measures
• In clustering techniques, similarity (or dissimilarity) is an important measurement.
• Informally, the similarity between two objects (e.g., two images, two documents, two records, etc.) is a numerical measure of the degree to which the two objects are alike.
• Dissimilarity, on the other hand, is the alternative (or opposite) measure of the degree to which two objects are different.
• Both similarity and dissimilarity are also termed proximity.
• Usually, similarity and dissimilarity are non-negative numbers that may range from zero (highly dissimilar, i.e., no similarity) to some finite or infinite value (highly similar, i.e., no dissimilarity).
Note:
• Frequently, the term distance is used as a synonym for dissimilarity.
• In fact, it refers to a special case of dissimilarity.
Proximity Measures: Single-Attribute
• Consider an object that is defined by a single attribute A (e.g., length), where the attribute A has n distinct values a₁, a₂, …, aₙ.
• A data structure called the “Dissimilarity matrix” is used to store the collection of proximities available for all pairs of the n attribute values.
• In other words, the dissimilarity matrix for an attribute A with n values is represented by an n × n matrix, as shown below.
    ⎡ 0                                  ⎤
    ⎢ p(2,1)   0                         ⎥
    ⎢ p(3,1)   p(3,2)   0                ⎥
    ⎢   ⋮        ⋮        ⋱              ⎥
    ⎣ p(n,1)   p(n,2)   ⋯   ⋯   0        ⎦  (n × n)
• Here, 𝑝(𝑖,𝑗) denotes the proximity measure between two objects with attribute values 𝑎𝑖
and 𝑎𝑗.
• Note: The proximity measure is symmetric, that is, 𝑝(𝑖,𝑗) = 𝑝(𝑗,𝑖)
Proximity Calculation
 The proximity calculation to compute p(i,j) differs for different types of attributes, according to the NOIR classification (Nominal, Ordinal, Interval, Ratio).
Proximity calculation for Nominal attributes:
• Consider, for example, a binary attribute Gender = {Male, Female}, where Male is equivalent to binary 1 and Female is equivalent to binary 0.
• The similarity value is 1 if the two objects contain the same attribute value, while a similarity value of 0 implies that the objects are not at all similar.
Object Gender
Ram Male
Sita Female
Laxman Male
• Here, letting the similarity value be denoted by p, the similarities among the different objects are as follows:
p(Ram, Sita) = 0
p(Ram, Laxman) = 1
Note: In this case, if q denotes the dissimilarity between two objects i and j with a single binary attribute, then q(i,j) = 1 − p(i,j).
Proximity Calculation
• Now let us focus on how to calculate proximity measures between objects that are defined by two or more binary attributes.
• Suppose the number of attributes is b. We can define a contingency table summarizing the matches and mismatches between any two objects x and y, as follows.
Table 12.3: Contingency table with binary attributes

                    Object y
                    1        0
Object x     1      f11      f10
             0      f01      f00
Here, 𝑓11= the number of attributes where 𝑥=1 and 𝑦=1.
𝑓10= the number of attributes where 𝑥=1 and 𝑦=0.
𝑓01= the number of attributes where 𝑥=0 and 𝑦=1.
𝑓00= the number of attributes where 𝑥=0 and 𝑦=0.
Note : 𝑓00 + 𝑓01 + 𝑓10 + 𝑓11 = 𝑏, the total number of binary attributes.
Now, two cases may arise: symmetric and asymmetric binary attributes.
Similarity Measure with Symmetric Binary
• The similarity between two objects defined by symmetric binary attributes is measured using the symmetric binary coefficient, denoted 𝒮 and defined as
𝒮 = (Number of matching attribute values) / (Total number of attributes)
or
𝒮 = (f11 + f00) / (f00 + f01 + f10 + f11)
The dissimilarity measure, likewise, can be denoted 𝒟 and defined as
𝒟 = (Number of mismatched attribute values) / (Total number of attributes)
or
𝒟 = (f01 + f10) / (f00 + f01 + f10 + f11)
Note that 𝒟 = 1 − 𝒮.
Example 12.2: Proximity measures with symmetric binary attributes
Consider the following dataset, where the objects are defined with symmetric binary attributes:
Gender = {M, F}, Food = {V, N}, Caste = {H, M}, Education = {L, I}, Hobby = {T, C}, Job = {Y, N}

Object Gender Food Caste Education Hobby Job
Hari   M      V    M     L         C     N
Ram    M      N    M     I         T     N
Tomi   F      N    H     L         C     Y

𝒮(Hari, Ram) = (1 + 2) / (1 + 2 + 1 + 2) = 0.5
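As a quick illustration, here is a minimal Python sketch of this computation; the 0/1 encodings below are written out by hand, following the slide's convention that the first value of each attribute pair maps to binary 1:

```python
# Symmetric binary similarity for Example 12.2.
# Hari = (M, V, M, L, C, N) -> 1,1,0,1,0,0; Ram = (M, N, M, I, T, N) -> 1,0,0,0,1,0
hari = [1, 1, 0, 1, 0, 0]
ram  = [1, 0, 0, 0, 1, 0]

def binary_counts(x, y):
    """Return (f11, f10, f01, f00) for two equal-length binary vectors."""
    f11 = sum(1 for a, b in zip(x, y) if a == 1 and b == 1)
    f10 = sum(1 for a, b in zip(x, y) if a == 1 and b == 0)
    f01 = sum(1 for a, b in zip(x, y) if a == 0 and b == 1)
    f00 = sum(1 for a, b in zip(x, y) if a == 0 and b == 0)
    return f11, f10, f01, f00

def symmetric_binary_similarity(x, y):
    f11, f10, f01, f00 = binary_counts(x, y)
    return (f00 + f11) / (f00 + f01 + f10 + f11)

print(symmetric_binary_similarity(hari, ram))  # 0.5, matching Example 12.2
```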
Proximity Measure with Asymmetric Binary
• The similarity between two objects defined by asymmetric binary attributes is measured by the Jaccard coefficient, often symbolized by 𝒥 and given by the following equation:
𝒥 = (Number of matching presences) / (Number of attributes not involved in 0–0 matches)
or
𝒥 = f11 / (f01 + f10 + f11)
Example 12.3: Jaccard Coefficient
Consider the following dataset.
Gender = {M, F}, Food = {V, N}, Caste = {H, M}, Education = {L, I}, Hobby = {T, C}, Job = {Y, N}
Calculate the Jaccard coefficient between Ram and Hari, assuming that all the binary attributes are asymmetric and that, for each pair of values of an attribute, the first one is more frequent than the second.
Object Gender Food Caste Education Hobby Job
Hari M V M L C N
Ram M N M I T N
Tomi F N H L C Y
𝒥(Hari, Ram) = 1 / (2 + 1 + 1) = 0.25
Note: 𝒥(Ram, Tomi) = 0 and 𝒥(Hari, Ram) = 𝒥(Ram, Hari), etc.
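A corresponding sketch for the Jaccard coefficient, using the same hand-made encoding as before; note that 0–0 matches are simply left out of the denominator:

```python
# Jaccard coefficient for asymmetric binary attributes (Example 12.3).
hari = [1, 1, 0, 1, 0, 0]   # same encoding as in Example 12.2
ram  = [1, 0, 0, 0, 1, 0]

def jaccard(x, y):
    f11 = sum(a == 1 and b == 1 for a, b in zip(x, y))
    f10 = sum(a == 1 and b == 0 for a, b in zip(x, y))
    f01 = sum(a == 0 and b == 1 for a, b in zip(x, y))
    return f11 / (f01 + f10 + f11)

print(jaccard(hari, ram))  # 1 / (1 + 2 + 1) = 0.25, matching Example 12.3
```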
Proximity Measure with Asymmetric Binary
Example 12.4:
Consider the following dataset.
Gender = {M, F}, Food = {V, N}, Caste = {H, M}, Education = {L, I}, Hobby = {T, C}, Job = {Y, N}

Object Gender Food Caste Education Hobby Job
Hari   M      V    M     L         C     N
Ram    M      N    M     I         T     N
Tomi   F      N    H     L         C     Y

How can you calculate the similarity if Gender, Hobby and Job are symmetric binary attributes, and Food, Caste and Education are asymmetric binary attributes?
Obtain the similarity matrix with the Jaccard coefficient for the objects above.
Proximity Measure with Categorical Attribute
• A binary attribute is a special kind of nominal attribute where the attribute has values with two states only.
• On the other hand, a categorical attribute is another kind of nominal attribute that has values with three or more states (e.g., Color = {Red, Green, Blue}).
• If 𝓈(x, y) denotes the similarity between two objects x and y, then
𝓈(x, y) = (Number of matches) / (Total number of attributes)
and the dissimilarity 𝒹(x, y) is
𝒹(x, y) = (Number of mismatches) / (Total number of attributes)
• If m = the number of matches and a = the number of categorical attributes with which the objects are defined, then
𝓈(x, y) = m / a and 𝒹(x, y) = (a − m) / a
Example 12.4:
Object Color Position Distance
1 R L L
2 B C M
3 G R M
4 R L H
The similarity matrix considering only the Color attribute is shown below.

       1   2   3   4
1      1
2      0   1
3      0   0   1
4      1   0   0   1

Dissimilarity matrix, 𝒹 = ?
Obtain the dissimilarity matrix considering both categorical attributes (i.e., Color and Position).
Proximity Measure with Ordinal Attribute
• An ordinal attribute is a special kind of categorical attribute, where the values of the attribute follow a sequence (ordering), e.g., Grade = {Ex, A, B, C} where Ex > A > B > C.
• Suppose A is an attribute of ordinal type with the set of values A = {a₁, a₂, …, aₙ}. Let the n values of A be ordered in ascending order as a₁ < a₂ < … < aₙ, and let the i-th attribute value aᵢ be ranked i, for i = 1, 2, …, n.
• The normalized value of aᵢ can then be expressed as
ãᵢ = (i − 1) / (n − 1)
• Thus, the normalized values lie in the range [0, 1].
• As ãᵢ is a numerical value, the similarity measure can then be calculated using any similarity measurement method for numerical attributes.
• For example, the proximity measure between two objects x and y with attribute values aᵢ and aⱼ can be expressed as
𝓈(x, y) = (ãᵢ − ãⱼ)²
where ãᵢ and ãⱼ are the normalized values of aᵢ and aⱼ, respectively.
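A minimal sketch of this rank normalization and the slide's squared-difference measure; the Quality ordering below matches Example 12.5, which follows:

```python
# Normalize ordinal values by rank: the i-th value (ascending) maps to (i-1)/(n-1).
def normalize_ordinal(value, ordered_values):
    i = ordered_values.index(value) + 1
    n = len(ordered_values)
    return (i - 1) / (n - 1)

quality_order = ["C", "B", "A", "Ex"]          # ascending: C < B < A < Ex
print(normalize_ordinal("A", quality_order))   # 0.666..., the 0.66 of Example 12.5

# Squared difference of normalized ranks, as on the slide:
d = (normalize_ordinal("A", quality_order) -
     normalize_ordinal("Ex", quality_order)) ** 2
print(round(d, 2))                             # 0.11
```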
Example 12.5:
Consider the following set of records, where each record is defined by two ordinal attributes, Size = {S, M, L} and Quality = {Ex, A, B, C}, such that S < M < L and Ex > A > B > C.
Object Size Quality
A S (0.0) A (0.66)
B L (1.0) Ex (1.0)
C L (1.0) C (0.0)
D M (0.5) B (0.33)
• The normalized values are shown in brackets.
• Their similarity measures are shown in the similarity matrix below.
Similarity matrix (over the objects A, B, C, D) = ?
Find the dissimilarity matrix when each object is defined by only one ordinal attribute, say Size (or Quality).
Proximity Measure with Interval Scale
• The measure called distance is usually used to estimate the proximity between two objects defined with interval-scaled attributes.
• We first present a generic formula expressing the distance d between two objects x and y in n-dimensional space. Suppose xᵢ and yᵢ denote the values of the i-th attribute of the objects x and y, respectively. Then
d(x, y) = ( Σᵢ₌₁ⁿ |xᵢ − yᵢ|ʳ )^(1/r)
• Here, r is a parameter (typically a positive integer).
• This distance metric is most popularly known as the Minkowski metric.
• This distance measure satisfies some well-known properties, listed below.
Properties of Minkowski metrics:
1. Non-negativity:
a. d(x, y) ≥ 0 for all x and y.
b. d(x, y) = 0 only if x = y. This is also called the identity condition.
2. Symmetry:
d(x, y) = d(y, x) for all x and y.
This condition ensures that the order in which objects are considered is not important.
3. Transitivity:
d(x, z) ≤ d(x, y) + d(y, z) for all x, y and z.
• This condition says that the distance d(x, z) between objects x and z is always less than or equal to the sum of the distances between x and y and between y and z.
• This property is also termed the Triangle Inequality.
Depending on the value of r, the distance measure is renamed accordingly.
1. Manhattan distance (L1 norm: r = 1)
The Manhattan distance is expressed as
d(x, y) = Σᵢ₌₁ⁿ |xᵢ − yᵢ|
where |·| denotes the absolute value. This metric is also alternatively termed the Taxicab metric or the city-block metric.
Example: x = [7, 3, 5] and y = [3, 2, 6].
The Manhattan distance is |7 − 3| + |3 − 2| + |5 − 6| = 6.
• As a special instance of the Manhattan distance, when the attribute values ∈ {0, 1}, it is called the Hamming distance.
• Alternatively, the Hamming distance is the number of bits that are different between two objects that have only binary values (i.e., between two binary vectors).
2. Euclidean distance (L2 norm: r = 2)
This metric is the same as the Euclidean distance between any two points x and y in ℛⁿ:
d(x, y) = √( Σᵢ₌₁ⁿ (xᵢ − yᵢ)² )
Example: x = [7, 3, 5] and y = [3, 2, 6].
The Euclidean distance between x and y is
d(x, y) = √((7 − 3)² + (3 − 2)² + (5 − 6)²) = √18 ≈ 4.243
3. Chebychev distance (L∞ norm: r → ∞)
This metric is defined as
d(x, y) = maxᵢ |xᵢ − yᵢ|
• Note the difference between the Chebychev metric and the Manhattan distance: instead of summing up the absolute differences (as in the Manhattan distance), we simply take the maximum of the absolute differences (as in the Chebychev distance). Hence, L∞ ≤ L1.
Example: x = [7, 3, 5] and y = [3, 2, 6].
The Manhattan distance = |7 − 3| + |3 − 2| + |5 − 6| = 6.
The Chebychev distance = max{|7 − 3|, |3 − 2|, |5 − 6|} = 4.
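A compact sketch of the Minkowski family on the running example x = [7, 3, 5], y = [3, 2, 6]:

```python
# Minkowski-family distances for interval-scaled attributes.
def minkowski(x, y, r):
    return sum(abs(a - b) ** r for a, b in zip(x, y)) ** (1 / r)

def chebyshev(x, y):
    return max(abs(a - b) for a, b in zip(x, y))

x, y = [7, 3, 5], [3, 2, 6]
print(minkowski(x, y, 1))   # Manhattan: 6.0
print(minkowski(x, y, 2))   # Euclidean: 4.2426...
print(chebyshev(x, y))      # Chebychev: 4
```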
4. Other metrics:
a. Canberra metric:
d(x, y) = Σᵢ₌₁ⁿ ( |xᵢ − yᵢ| / (xᵢ + yᵢ) )^q
• where q is a real number. Usually q = 1; because the numerator of each ratio is always ≤ its denominator, each ratio is ≤ 1, so the sum is always bounded and small.
• If q ≠ 1, it is called the Fractional Canberra metric.
• If q > 1, the opposite relationship holds.
b. Hellinger metric:
d(x, y) = √( Σᵢ₌₁ⁿ (√xᵢ − √yᵢ)² )
This metric is then used either squared or transformed into an acceptable range [−1, +1] using one of the following transformations:
i. d(x, y) = (1 − r(x, y))²
ii. d(x, y) = 1 − r(x, y)
where r(x, y) is the correlation coefficient between x and y.
Note: A dissimilarity measurement is not necessarily a distance measurement.
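A sketch of these two metrics, assuming non-negative attribute values (needed both for the Canberra ratios and for the square roots in the Hellinger form as reconstructed above); the sample vectors are the running example:

```python
import math

# Canberra metric with exponent q (q = 1 is the usual case); assumes x, y >= 0.
def canberra(x, y, q=1):
    return sum((abs(a - b) / (a + b)) ** q
               for a, b in zip(x, y) if a + b > 0)

# Hellinger-style distance on non-negative data.
def hellinger(x, y):
    return math.sqrt(sum((math.sqrt(a) - math.sqrt(b)) ** 2
                         for a, b in zip(x, y)))

x, y = [7, 3, 5], [3, 2, 6]
print(canberra(x, y))    # 0.4 + 0.2 + 0.0909... = 0.6909...
print(hellinger(x, y))
```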
Proximity Measure for Ratio-Scale
The proximity between objects with ratio-scaled variables can be computed with the following steps (see the sketch below):
1. Apply an appropriate transformation to the data to bring it onto a linear scale (e.g., a logarithmic transformation for data of the form X = Ae^B).
2. The transformed values can be treated as interval-scaled values; any distance measure discussed for interval-scaled variables can then be applied to measure the similarity.
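A minimal sketch of these two steps, with illustrative (assumed) exponential-scale values:

```python
import math

# Ratio-scaled data grows multiplicatively; take logs first (step 1), then
# apply any interval-scale distance, Euclidean here (step 2).
def ratio_scale_distance(x, y):
    lx = [math.log(v) for v in x]
    ly = [math.log(v) for v in y]
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lx, ly)))

print(ratio_scale_distance([10, 1000, 100000], [100, 1000, 10000]))
```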
Note:
There are two further concerns regarding proximity measures:
• Normalization of the measured values.
• Intra-transformation from a similarity to a dissimilarity measure and vice versa.
Normalization:
• A major problem when using similarity (or dissimilarity) measures (such as Euclidean distance) is that large values frequently swamp small ones.
• For example, consider data with three cost attributes, Cost 1, Cost 2 and Cost 3, where Cost 1 takes far larger values than the other two. Here, the contribution of Cost 2 and Cost 3 is insignificant compared to Cost 1 as far as the Euclidean distance is concerned.
• This problem can be avoided if we consider the normalized values of all numerical attributes.
• Another normalization may be to map the estimated values into a normalized range, say [0, 1]. Note that if a measure 𝓈 varies in the range [0, ∞), it can be normalized as
𝓈′ = 1 / (1 + 𝓈)
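A sketch of both ideas: min–max normalization of raw attribute values, and the 1/(1 + 𝓈) mapping for a measure on [0, ∞); the income values are illustrative:

```python
# Min-max normalization to [0, 1]; assumes the values are not all equal.
def min_max_normalize(values):
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

# The slide's mapping: squashes a measure s in [0, inf) into (0, 1].
def squash(s):
    return 1 / (1 + s)

print(min_max_normalize([25000, 15000, 20000, 30000]))  # incomes on a common scale
print(squash(4.0))                                      # 0.2
```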
Intra-transformation:
• Transforming similarities to dissimilarities and vice versa is relatively straightforward.
• If the similarity (or dissimilarity) falls in the interval [0, 1], the dissimilarity (or similarity) can be obtained as
d = 1 − 𝓈 or 𝓈 = 1 − d
• Another approach is to define similarity as the negative of dissimilarity (or vice versa).
Proximity Measure with Mixed Attributes
• The previous similarity measures assume that all the attributes are of the same type. Thus, a general approach is needed when the attributes are of different types.
• One straightforward approach is to compute the similarity between each attribute separately and then combine these attribute similarities using a method that results in a similarity between 0 and 1.
• Typically, the overall similarity is defined as the average of all the individual attribute similarities.
• See the algorithm below for doing this.
Similarity Measure with Vector Objects
Suppose the objects are defined with attributes A₁, A₂, …, Aₙ.
1. For the k-th attribute (k = 1, 2, …, n), compute the similarity 𝓈ₖ(x, y) in the range [0, 1].
2. Compute the overall similarity between the two objects using the formula
similarity(x, y) = ( Σₖ₌₁ⁿ 𝓈ₖ(x, y) ) / n
3. The above formula can be modified by weighting the contribution of each attribute. If wₖ is the weight for the k-th attribute, then
w_similarity(x, y) = Σₖ₌₁ⁿ wₖ 𝓈ₖ(x, y), such that Σₖ₌₁ⁿ wₖ = 1
4. The definition of the Minkowski distance can also be modified as follows:
d(x, y) = ( Σₖ₌₁ⁿ wₖ |xₖ − yₖ|ʳ )^(1/r)
All symbols have their usual meanings.
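A minimal sketch of Steps 2 and 3: it takes per-attribute similarities (already computed by any of the earlier measures) and illustrative weights as input:

```python
# Combine per-attribute similarities in [0, 1] into an overall similarity.
def overall_similarity(per_attribute_sims, weights=None):
    n = len(per_attribute_sims)
    if weights is None:
        weights = [1 / n] * n                 # Step 2: plain average
    assert abs(sum(weights) - 1) < 1e-9       # Step 3: weights must sum to 1
    return sum(w * s for w, s in zip(weights, per_attribute_sims))

# e.g., similarities already computed for a binary, a categorical and a numeric attribute:
print(overall_similarity([1.0, 0.5, 0.8]))                     # unweighted: 0.766...
print(overall_similarity([1.0, 0.5, 0.8], [0.5, 0.25, 0.25]))  # weighted: 0.825
```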
Similarity Measure with Mixed Attributes
Example 12.6:
Consider the following set of objects. Obtain the similarity matrix.
[For C: X > A > B > C]

Object   A (Binary)   B (Categorical)   C (Ordinal)   D (Numeric)   E (Numeric)
1        Y            R                 X             475           10⁸
2        N            R                 A             10            10⁻²
3        N            B                 C             1000          10⁵
4        Y            G                 B             500           10³
5        Y            B                 A             80            1

How can cosine similarity be applied to this?
Non-Metric Similarity
• In many applications (such as information retrieval) objects are complex and contain a large number of symbolic entities (such as keywords, phrases, etc.).
• To measure the distance between complex objects, it is often desirable to introduce a non-metric similarity function.
• Here, we discuss a few such non-metric similarity measurements.
Cosine similarity
Suppose x and y denote two vectors representing two complex objects. The cosine similarity, denoted cos(x, y), is defined as
cos(x, y) = (x · y) / (‖x‖ ‖y‖)
• where x · y denotes the vector dot product, namely x · y = Σᵢ₌₁ⁿ xᵢyᵢ, with x = [x₁, x₂, …, xₙ] and y = [y₁, y₂, …, yₙ].
• ‖x‖ and ‖y‖ denote the Euclidean norms of the vectors x and y, respectively (essentially their lengths), that is,
‖x‖ = √(x₁² + x₂² + … + xₙ²) and ‖y‖ = √(y₁² + y₂² + … + yₙ²)
Cosine Similarity
• In fact, cosine similarity is essentially a measure of the (cosine of the) angle between x and y.
• Thus, if the cosine similarity is 1, the angle between x and y is 0°, and in this case x and y are the same except for magnitude.
• On the other hand, if the cosine similarity is 0, the angle between x and y is 90°, and they do not share any terms.
• In view of this, cosine similarity can be written equivalently as
cos(x, y) = (x · y) / (‖x‖ ‖y‖) = (x/‖x‖) · (y/‖y‖) = x′ · y′
where x′ = x/‖x‖ and y′ = y/‖y‖. This means that cosine similarity does not take the magnitudes of the two vectors into account when computing similarity.
• It is thus, in a way, a normalized measurement.
Example 12.7: Cosine Similarity
Suppose we are given two documents, each summarized by the counts of 10 words, shown as vectors x and y below.
x = [3, 2, 0, 5, 0, 0, 0, 2, 0, 0] and y = [1, 0, 0, 0, 0, 0, 0, 1, 0, 2]
Thus, x · y = 3·1 + 2·0 + 0·0 + 5·0 + 0·0 + 0·0 + 0·0 + 2·1 + 0·0 + 0·2 = 5
‖x‖ = √(3² + 2² + 0 + 5² + 0 + 0 + 0 + 2² + 0 + 0) = √42 ≈ 6.48
‖y‖ = √(1² + 0 + 0 + 0 + 0 + 0 + 0 + 1² + 0 + 2²) = √6 ≈ 2.45
∴ cos(x, y) = 5 / (6.48 × 2.45) ≈ 0.31
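A direct sketch of the computation in Example 12.7:

```python
import math

# Cosine similarity between two term-count vectors.
def cosine(x, y):
    dot = sum(a * b for a, b in zip(x, y))
    nx = math.sqrt(sum(a * a for a in x))
    ny = math.sqrt(sum(b * b for b in y))
    return dot / (nx * ny)

x = [3, 2, 0, 5, 0, 0, 0, 2, 0, 0]
y = [1, 0, 0, 0, 0, 0, 0, 1, 0, 2]
print(round(cosine(x, y), 2))   # 0.31, matching Example 12.7
```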
Extended Jaccard Coefficient
The extended Jaccard coefficient, denoted EJ, is defined as
EJ = (x · y) / (‖x‖² + ‖y‖² − x · y)
• This is also alternatively termed the Tanimoto coefficient and can be used to measure, for example, document similarity.
Compute the extended Jaccard coefficient (EJ) for Example 12.7 above.
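A sketch computing EJ for the vectors of Example 12.7, which gives 5 / (42 + 6 − 5) ≈ 0.116 under the formula above:

```python
# Extended Jaccard (Tanimoto) coefficient: EJ = x.y / (||x||^2 + ||y||^2 - x.y).
def extended_jaccard(x, y):
    dot = sum(a * b for a, b in zip(x, y))
    nx2 = sum(a * a for a in x)
    ny2 = sum(b * b for b in y)
    return dot / (nx2 + ny2 - dot)

x = [3, 2, 0, 5, 0, 0, 0, 2, 0, 0]
y = [1, 0, 0, 0, 0, 0, 0, 1, 0, 2]
print(round(extended_jaccard(x, y), 3))   # 5 / 43 = 0.116
```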
Pearson’s Correlation
• The correlation between two objects x and y gives a measure of the linear relationship between the attributes of the objects.
• More precisely, Pearson's correlation coefficient between two objects x and y is defined as follows:
P(x, y) = Sxy / (Sx · Sy)
where
Sxy = covariance(x, y) = (1/(n−1)) Σᵢ₌₁ⁿ (xᵢ − x̄)(yᵢ − ȳ)
Sx = standard deviation of x = √( (1/(n−1)) Σᵢ₌₁ⁿ (xᵢ − x̄)² )
Sy = standard deviation of y = √( (1/(n−1)) Σᵢ₌₁ⁿ (yᵢ − ȳ)² )
x̄ = mean(x) = (1/n) Σᵢ₌₁ⁿ xᵢ
ȳ = mean(y) = (1/n) Σᵢ₌₁ⁿ yᵢ
and n is the number of attributes in x and y.
Note 1: Correlation is always in the range −1 to 1. A correlation of 1 (−1) means that x and y have a perfect positive (negative) linear relationship, that is, xᵢ = a·yᵢ + b for some constants a and b.

Example 12.8: Pearson's correlation
Calculate the Pearson's correlation of the two vectors x and y given below.
x = [3, 6, 0, 3, 6]
y = [1, 2, 0, 1, 2]
Note: Vector components can be negative values as well.

Note: If the correlation is 0, then there is no linear relationship between the attributes of the objects.

Example 12.9: Non-linear correlation
Verify that there is no linear relationship among the attributes of the objects x and y given below.
x = [−3, −2, −1, 0, 1, 2, 3]
y = [9, 4, 1, 0, 1, 4, 9]
P(x, y) = 0; also note that yᵢ = xᵢ² for all attributes here.
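A sketch of this definition, checked against Examples 12.8 and 12.9:

```python
# Pearson's correlation with the (n - 1) normalization used on the slide.
def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)
    sx = (sum((a - mx) ** 2 for a in x) / (n - 1)) ** 0.5
    sy = (sum((b - my) ** 2 for b in y) / (n - 1)) ** 0.5
    return sxy / (sx * sy)

print(round(pearson([3, 6, 0, 3, 6], [1, 2, 0, 1, 2]), 4))                 # 1.0 (y = x/3)
print(round(pearson([-3, -2, -1, 0, 1, 2, 3], [9, 4, 1, 0, 1, 4, 9]), 4))  # 0.0
```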
Mahalanobis Distance
• A related issue with distance measurement is how to handle the situation when attributes do not have the same range of values.
• For example, consider a record with two attributes, Age and Income. The two attributes have different scales, so Euclidean distance is not a suitable measure in such a situation.
• A further question is how to compute distance when there is correlation between some of the attributes, perhaps in addition to the difference in ranges of values.
• A generalization of Euclidean distance, the Mahalanobis distance, is useful when attributes are (partially) correlated and/or have different ranges of values.
• The Mahalanobis distance between two objects (vectors) x and y is defined as
M(x, y) = (x − y) Σ⁻¹ (x − y)ᵀ
Here, Σ⁻¹ is the inverse of the covariance matrix.
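A sketch using NumPy, with an assumed small dataset from which the covariance matrix Σ is estimated (the numbers are purely illustrative):

```python
import numpy as np

# Mahalanobis distance per the slide's formula: (x - y) Sigma^{-1} (x - y)^T.
# Sigma is estimated from the data (rows = objects, columns = attributes).
data = np.array([[64.0, 580.0], [66.0, 570.0], [68.0, 590.0],
                 [69.0, 660.0], [73.0, 600.0]])
sigma_inv = np.linalg.inv(np.cov(data, rowvar=False))   # inverse covariance matrix

def mahalanobis(x, y, sigma_inv):
    diff = x - y
    return float(diff @ sigma_inv @ diff.T)

print(mahalanobis(data[0], data[3], sigma_inv))
```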
Set Difference and Time Difference
Set Difference
• Another non-metric dissimilarity measurement is the set difference.
• Given two sets A and B, A − B is the set of elements of A that are not in B. Thus, if A = {1, 2, 3, 4} and B = {2, 3, 4}, then A − B = {1} and B − A = ∅.
• We can define the distance d between two sets A and B as
d(A, B) = |A − B|
where |A| denotes the size of set A.
Note:
This measure satisfies non-negativity, but neither the identity condition (d(A, B) = 0 does not imply A = B) nor symmetry.
• A modified definition, however, satisfies these metric properties:
d(A, B) = |A − B| + |B − A|
Time Difference
• It defines the distance between times of the day as follows:
d(t₁, t₂) = t₂ − t₁, if t₁ ≤ t₂
d(t₁, t₂) = 24 + (t₂ − t₁), if t₁ > t₂
• Example: d(1 pm, 2 pm) = 1 hour; d(2 pm, 1 pm) = 23 hours.
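A sketch of this wrap-around distance on a 24-hour clock:

```python
# Time-of-day distance in hours, wrapping around midnight (24-hour clock).
def time_distance(t1, t2):
    return t2 - t1 if t1 <= t2 else 24 + (t2 - t1)

print(time_distance(13, 14))   # d(1 pm, 2 pm) = 1 hour
print(time_distance(14, 13))   # d(2 pm, 1 pm) = 23 hours
```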
More Related Content

What's hot

Principal component analysis and lda
Principal component analysis and ldaPrincipal component analysis and lda
Principal component analysis and lda
Suresh Pokharel
 
AI Lecture 7 (uncertainty)
AI Lecture 7 (uncertainty)AI Lecture 7 (uncertainty)
AI Lecture 7 (uncertainty)
Tajim Md. Niamat Ullah Akhund
 
Fuzzy Clustering(C-means, K-means)
Fuzzy Clustering(C-means, K-means)Fuzzy Clustering(C-means, K-means)
Fuzzy Clustering(C-means, K-means)
Fellowship at Vodafone FutureLab
 
Classification Based Machine Learning Algorithms
Classification Based Machine Learning AlgorithmsClassification Based Machine Learning Algorithms
Classification Based Machine Learning Algorithms
Md. Main Uddin Rony
 
Data preprocessing
Data preprocessingData preprocessing
Data preprocessing
Gajanand Sharma
 
Data preprocessing in Machine learning
Data preprocessing in Machine learning Data preprocessing in Machine learning
Data preprocessing in Machine learning
pyingkodi maran
 
Decision tree in artificial intelligence
Decision tree in artificial intelligenceDecision tree in artificial intelligence
Decision tree in artificial intelligence
MdAlAmin187
 
Fuzzy Membership Function
Fuzzy Membership Function Fuzzy Membership Function
Introduction to Machine Learning with Find-S
Introduction to Machine Learning with Find-SIntroduction to Machine Learning with Find-S
Introduction to Machine Learning with Find-S
Knoldus Inc.
 
Knowledge representation and Predicate logic
Knowledge representation and Predicate logicKnowledge representation and Predicate logic
Knowledge representation and Predicate logic
Amey Kerkar
 
Bayes Classification
Bayes ClassificationBayes Classification
Bayes Classification
sathish sak
 
Principal Component Analysis
Principal Component AnalysisPrincipal Component Analysis
Principal Component Analysis
Ricardo Wendell Rodrigues da Silveira
 
Machine Learning: Introduction to Neural Networks
Machine Learning: Introduction to Neural NetworksMachine Learning: Introduction to Neural Networks
Machine Learning: Introduction to Neural Networks
Francesco Collova'
 
Decision trees in Machine Learning
Decision trees in Machine Learning Decision trees in Machine Learning
Decision trees in Machine Learning
Mohammad Junaid Khan
 
2.4 rule based classification
2.4 rule based classification2.4 rule based classification
2.4 rule based classification
Krish_ver2
 
Chapter - 6 Data Mining Concepts and Techniques 2nd Ed slides Han &amp; Kamber
Chapter - 6 Data Mining Concepts and Techniques 2nd Ed slides Han &amp; KamberChapter - 6 Data Mining Concepts and Techniques 2nd Ed slides Han &amp; Kamber
Chapter - 6 Data Mining Concepts and Techniques 2nd Ed slides Han &amp; Kamber
error007
 
Chapter - 7 Data Mining Concepts and Techniques 2nd Ed slides Han &amp; Kamber
Chapter - 7 Data Mining Concepts and Techniques 2nd Ed slides Han &amp; KamberChapter - 7 Data Mining Concepts and Techniques 2nd Ed slides Han &amp; Kamber
Chapter - 7 Data Mining Concepts and Techniques 2nd Ed slides Han &amp; Kamber
error007
 
Knowledge representation in AI
Knowledge representation in AIKnowledge representation in AI
Knowledge representation in AI
Vishal Singh
 
Inductive bias
Inductive biasInductive bias
Inductive bias
swapnac12
 
Data preprocessing
Data preprocessingData preprocessing
Data preprocessing
ankur bhalla
 

What's hot (20)

Principal component analysis and lda
Principal component analysis and ldaPrincipal component analysis and lda
Principal component analysis and lda
 
AI Lecture 7 (uncertainty)
AI Lecture 7 (uncertainty)AI Lecture 7 (uncertainty)
AI Lecture 7 (uncertainty)
 
Fuzzy Clustering(C-means, K-means)
Fuzzy Clustering(C-means, K-means)Fuzzy Clustering(C-means, K-means)
Fuzzy Clustering(C-means, K-means)
 
Classification Based Machine Learning Algorithms
Classification Based Machine Learning AlgorithmsClassification Based Machine Learning Algorithms
Classification Based Machine Learning Algorithms
 
Data preprocessing
Data preprocessingData preprocessing
Data preprocessing
 
Data preprocessing in Machine learning
Data preprocessing in Machine learning Data preprocessing in Machine learning
Data preprocessing in Machine learning
 
Decision tree in artificial intelligence
Decision tree in artificial intelligenceDecision tree in artificial intelligence
Decision tree in artificial intelligence
 
Fuzzy Membership Function
Fuzzy Membership Function Fuzzy Membership Function
Fuzzy Membership Function
 
Introduction to Machine Learning with Find-S
Introduction to Machine Learning with Find-SIntroduction to Machine Learning with Find-S
Introduction to Machine Learning with Find-S
 
Knowledge representation and Predicate logic
Knowledge representation and Predicate logicKnowledge representation and Predicate logic
Knowledge representation and Predicate logic
 
Bayes Classification
Bayes ClassificationBayes Classification
Bayes Classification
 
Principal Component Analysis
Principal Component AnalysisPrincipal Component Analysis
Principal Component Analysis
 
Machine Learning: Introduction to Neural Networks
Machine Learning: Introduction to Neural NetworksMachine Learning: Introduction to Neural Networks
Machine Learning: Introduction to Neural Networks
 
Decision trees in Machine Learning
Decision trees in Machine Learning Decision trees in Machine Learning
Decision trees in Machine Learning
 
2.4 rule based classification
2.4 rule based classification2.4 rule based classification
2.4 rule based classification
 
Chapter - 6 Data Mining Concepts and Techniques 2nd Ed slides Han &amp; Kamber
Chapter - 6 Data Mining Concepts and Techniques 2nd Ed slides Han &amp; KamberChapter - 6 Data Mining Concepts and Techniques 2nd Ed slides Han &amp; Kamber
Chapter - 6 Data Mining Concepts and Techniques 2nd Ed slides Han &amp; Kamber
 
Chapter - 7 Data Mining Concepts and Techniques 2nd Ed slides Han &amp; Kamber
Chapter - 7 Data Mining Concepts and Techniques 2nd Ed slides Han &amp; KamberChapter - 7 Data Mining Concepts and Techniques 2nd Ed slides Han &amp; Kamber
Chapter - 7 Data Mining Concepts and Techniques 2nd Ed slides Han &amp; Kamber
 
Knowledge representation in AI
Knowledge representation in AIKnowledge representation in AI
Knowledge representation in AI
 
Inductive bias
Inductive biasInductive bias
Inductive bias
 
Data preprocessing
Data preprocessingData preprocessing
Data preprocessing
 

Similar to Similarity Measures (pptx)

Data mining
Data miningData mining
Data mining
Kani Selvam
 
20IT501_DWDM_PPT_Unit_IV.ppt
20IT501_DWDM_PPT_Unit_IV.ppt20IT501_DWDM_PPT_Unit_IV.ppt
20IT501_DWDM_PPT_Unit_IV.ppt
SamPrem3
 
20IT501_DWDM_PPT_Unit_IV.ppt
20IT501_DWDM_PPT_Unit_IV.ppt20IT501_DWDM_PPT_Unit_IV.ppt
20IT501_DWDM_PPT_Unit_IV.ppt
PalaniKumarR2
 
[PPT]
[PPT][PPT]
[PPT]
butest
 
Clustering
ClusteringClustering
Clustering
NLPseminar
 
47 292-298
47 292-29847 292-298
47 292-298
idescitation
 
BayesianClassifierAndConditionalProbability.pptx
BayesianClassifierAndConditionalProbability.pptxBayesianClassifierAndConditionalProbability.pptx
BayesianClassifierAndConditionalProbability.pptx
Nishant83346
 
Chapter1.pdf this is the first chapter of the book, will share
Chapter1.pdf this is the first chapter of the book, will shareChapter1.pdf this is the first chapter of the book, will share
Chapter1.pdf this is the first chapter of the book, will share
ssuser5e86d2
 
IJCSI-10-6-1-288-292
IJCSI-10-6-1-288-292IJCSI-10-6-1-288-292
IJCSI-10-6-1-288-292
HARDIK SINGH
 
Application for Logical Expression Processing
Application for Logical Expression Processing Application for Logical Expression Processing
Application for Logical Expression Processing
csandit
 
Clustering in artificial intelligence
Clustering in artificial intelligence Clustering in artificial intelligence
Clustering in artificial intelligence
Karam Munir Butt
 
Single Reduct Generation Based on Relative Indiscernibility of Rough Set Theo...
Single Reduct Generation Based on Relative Indiscernibility of Rough Set Theo...Single Reduct Generation Based on Relative Indiscernibility of Rough Set Theo...
Single Reduct Generation Based on Relative Indiscernibility of Rough Set Theo...
ijsc
 
Cluster analysis
Cluster analysisCluster analysis
Cluster analysis
Avijit Famous
 
Google BigQuery is a very popular enterprise warehouse that’s built with a co...
Google BigQuery is a very popular enterprise warehouse that’s built with a co...Google BigQuery is a very popular enterprise warehouse that’s built with a co...
Google BigQuery is a very popular enterprise warehouse that’s built with a co...
Abebe Admasu
 
Supervised and unsupervised learning
Supervised and unsupervised learningSupervised and unsupervised learning
Supervised and unsupervised learning
AmAn Singh
 
A Study in Employing Rough Set Based Approach for Clustering on Categorical ...
A Study in Employing Rough Set Based Approach for Clustering  on Categorical ...A Study in Employing Rough Set Based Approach for Clustering  on Categorical ...
A Study in Employing Rough Set Based Approach for Clustering on Categorical ...
IOSR Journals
 
G0354451
G0354451G0354451
G0354451
iosrjournals
 
Localization, Classification, and Evaluation.pdf
Localization, Classification, and Evaluation.pdfLocalization, Classification, and Evaluation.pdf
Localization, Classification, and Evaluation.pdf
SSN College of Engineering, Kalavakkam
 
A Novel Algorithm for Design Tree Classification with PCA
A Novel Algorithm for Design Tree Classification with PCAA Novel Algorithm for Design Tree Classification with PCA
A Novel Algorithm for Design Tree Classification with PCA
Editor Jacotech
 
1376846406 14447221
1376846406  144472211376846406  14447221
1376846406 14447221
Editor Jacotech
 

Similar to Similarity Measures (pptx) (20)

Data mining
Data miningData mining
Data mining
 
20IT501_DWDM_PPT_Unit_IV.ppt
20IT501_DWDM_PPT_Unit_IV.ppt20IT501_DWDM_PPT_Unit_IV.ppt
20IT501_DWDM_PPT_Unit_IV.ppt
 
20IT501_DWDM_PPT_Unit_IV.ppt
20IT501_DWDM_PPT_Unit_IV.ppt20IT501_DWDM_PPT_Unit_IV.ppt
20IT501_DWDM_PPT_Unit_IV.ppt
 
[PPT]
[PPT][PPT]
[PPT]
 
Clustering
ClusteringClustering
Clustering
 
47 292-298
47 292-29847 292-298
47 292-298
 
BayesianClassifierAndConditionalProbability.pptx
BayesianClassifierAndConditionalProbability.pptxBayesianClassifierAndConditionalProbability.pptx
BayesianClassifierAndConditionalProbability.pptx
 
Chapter1.pdf this is the first chapter of the book, will share
Chapter1.pdf this is the first chapter of the book, will shareChapter1.pdf this is the first chapter of the book, will share
Chapter1.pdf this is the first chapter of the book, will share
 
IJCSI-10-6-1-288-292
IJCSI-10-6-1-288-292IJCSI-10-6-1-288-292
IJCSI-10-6-1-288-292
 
Application for Logical Expression Processing
Application for Logical Expression Processing Application for Logical Expression Processing
Application for Logical Expression Processing
 
Clustering in artificial intelligence
Clustering in artificial intelligence Clustering in artificial intelligence
Clustering in artificial intelligence
 
Single Reduct Generation Based on Relative Indiscernibility of Rough Set Theo...
Single Reduct Generation Based on Relative Indiscernibility of Rough Set Theo...Single Reduct Generation Based on Relative Indiscernibility of Rough Set Theo...
Single Reduct Generation Based on Relative Indiscernibility of Rough Set Theo...
 
Cluster analysis
Cluster analysisCluster analysis
Cluster analysis
 
Google BigQuery is a very popular enterprise warehouse that’s built with a co...
Google BigQuery is a very popular enterprise warehouse that’s built with a co...Google BigQuery is a very popular enterprise warehouse that’s built with a co...
Google BigQuery is a very popular enterprise warehouse that’s built with a co...
 
Supervised and unsupervised learning
Supervised and unsupervised learningSupervised and unsupervised learning
Supervised and unsupervised learning
 
A Study in Employing Rough Set Based Approach for Clustering on Categorical ...
A Study in Employing Rough Set Based Approach for Clustering  on Categorical ...A Study in Employing Rough Set Based Approach for Clustering  on Categorical ...
A Study in Employing Rough Set Based Approach for Clustering on Categorical ...
 
G0354451
G0354451G0354451
G0354451
 
Localization, Classification, and Evaluation.pdf
Localization, Classification, and Evaluation.pdfLocalization, Classification, and Evaluation.pdf
Localization, Classification, and Evaluation.pdf
 
A Novel Algorithm for Design Tree Classification with PCA
A Novel Algorithm for Design Tree Classification with PCAA Novel Algorithm for Design Tree Classification with PCA
A Novel Algorithm for Design Tree Classification with PCA
 
1376846406 14447221
1376846406  144472211376846406  14447221
1376846406 14447221
 

Recently uploaded

SD_Instructional-Design-Frameworkzz.pptx
SD_Instructional-Design-Frameworkzz.pptxSD_Instructional-Design-Frameworkzz.pptx
SD_Instructional-Design-Frameworkzz.pptx
MarkKennethBellen1
 
Celebrating 25th Year SATURDAY, 27th JULY, 2024
Celebrating 25th Year SATURDAY, 27th JULY, 2024Celebrating 25th Year SATURDAY, 27th JULY, 2024
Celebrating 25th Year SATURDAY, 27th JULY, 2024
APEC Melmaruvathur
 
english 9 Quarter 1 Week 1 Modals and its Uses
english 9 Quarter 1 Week 1 Modals and its Usesenglish 9 Quarter 1 Week 1 Modals and its Uses
english 9 Quarter 1 Week 1 Modals and its Uses
EjNoveno
 
PPT Jessica powerpoint physical geography
PPT Jessica powerpoint physical geographyPPT Jessica powerpoint physical geography
PPT Jessica powerpoint physical geography
np2fjc9csm
 
Email Marketing in Odoo 17 - Odoo 17 Slides
Email Marketing  in Odoo 17 - Odoo 17 SlidesEmail Marketing  in Odoo 17 - Odoo 17 Slides
Email Marketing in Odoo 17 - Odoo 17 Slides
Celine George
 
How to install python packages from Pycharm
How to install python packages from PycharmHow to install python packages from Pycharm
How to install python packages from Pycharm
Celine George
 
Q1_LE_Music and Arts 7_Lesson 1_Weeks 1-2.docx
Q1_LE_Music and Arts 7_Lesson 1_Weeks 1-2.docxQ1_LE_Music and Arts 7_Lesson 1_Weeks 1-2.docx
Q1_LE_Music and Arts 7_Lesson 1_Weeks 1-2.docx
ElynTGomez
 
How to Configure Field Cleaning Rules in Odoo 17
How to Configure Field Cleaning Rules in Odoo 17How to Configure Field Cleaning Rules in Odoo 17
How to Configure Field Cleaning Rules in Odoo 17
Celine George
 
Float Operations in Odoo 17 - Odoo 17 Slides
Float Operations in Odoo 17 - Odoo 17 SlidesFloat Operations in Odoo 17 - Odoo 17 Slides
Float Operations in Odoo 17 - Odoo 17 Slides
Celine George
 
How to Load Custom Field to POS in Odoo 17 - Odoo 17 Slides
How to Load Custom Field to POS in Odoo 17 - Odoo 17 SlidesHow to Load Custom Field to POS in Odoo 17 - Odoo 17 Slides
How to Load Custom Field to POS in Odoo 17 - Odoo 17 Slides
Celine George
 
ACTION PLAN ON NUTRITION MONTH 2024.docx
ACTION PLAN ON NUTRITION MONTH 2024.docxACTION PLAN ON NUTRITION MONTH 2024.docx
ACTION PLAN ON NUTRITION MONTH 2024.docx
LeviMaePacatang1
 
How to Set Start Category in Odoo 17 POS
How to Set Start Category in Odoo 17 POSHow to Set Start Category in Odoo 17 POS
How to Set Start Category in Odoo 17 POS
Celine George
 
great athletes ppt bahasa inggris kelas x kurikulum merdeka
great athletes ppt bahasa inggris kelas x kurikulum merdekagreat athletes ppt bahasa inggris kelas x kurikulum merdeka
great athletes ppt bahasa inggris kelas x kurikulum merdeka
MonicaWijaya13
 
Brigada eskwela 2024 sample template NARRATIVE REPORT.docx
Brigada eskwela 2024 sample template NARRATIVE REPORT.docxBrigada eskwela 2024 sample template NARRATIVE REPORT.docx
Brigada eskwela 2024 sample template NARRATIVE REPORT.docx
BerlynFamilaran1
 
principles of auditing types of audit ppt
principles of auditing types of audit pptprinciples of auditing types of audit ppt
principles of auditing types of audit ppt
sangeetha280806
 
How to Use Quality Module in Odoo 17 - Odoo 17 Slides
How to Use Quality Module in Odoo 17 - Odoo 17 SlidesHow to Use Quality Module in Odoo 17 - Odoo 17 Slides
How to Use Quality Module in Odoo 17 - Odoo 17 Slides
Celine George
 
Module 5 Bone, Joints & Muscle Injuries.ppt
Module 5 Bone, Joints & Muscle Injuries.pptModule 5 Bone, Joints & Muscle Injuries.ppt
Module 5 Bone, Joints & Muscle Injuries.ppt
KIPAIZAGABAWA1
 
Class-Orientation for school year 2024 - 2025
Class-Orientation for school year 2024 - 2025Class-Orientation for school year 2024 - 2025
Class-Orientation for school year 2024 - 2025
KIPAIZAGABAWA1
 
What is the Use of API.onchange in Odoo 17
What is the Use of API.onchange in Odoo 17What is the Use of API.onchange in Odoo 17
What is the Use of API.onchange in Odoo 17
Celine George
 
How to Configure Extra Steps During Checkout in Odoo 17 Website App
How to Configure Extra Steps During Checkout in Odoo 17 Website AppHow to Configure Extra Steps During Checkout in Odoo 17 Website App
How to Configure Extra Steps During Checkout in Odoo 17 Website App
Celine George
 

Recently uploaded (20)

SD_Instructional-Design-Frameworkzz.pptx
SD_Instructional-Design-Frameworkzz.pptxSD_Instructional-Design-Frameworkzz.pptx
SD_Instructional-Design-Frameworkzz.pptx
 
Celebrating 25th Year SATURDAY, 27th JULY, 2024
Celebrating 25th Year SATURDAY, 27th JULY, 2024Celebrating 25th Year SATURDAY, 27th JULY, 2024
Celebrating 25th Year SATURDAY, 27th JULY, 2024
 
english 9 Quarter 1 Week 1 Modals and its Uses
english 9 Quarter 1 Week 1 Modals and its Usesenglish 9 Quarter 1 Week 1 Modals and its Uses
english 9 Quarter 1 Week 1 Modals and its Uses
 
PPT Jessica powerpoint physical geography
PPT Jessica powerpoint physical geographyPPT Jessica powerpoint physical geography
PPT Jessica powerpoint physical geography
 
Email Marketing in Odoo 17 - Odoo 17 Slides
Email Marketing  in Odoo 17 - Odoo 17 SlidesEmail Marketing  in Odoo 17 - Odoo 17 Slides
Email Marketing in Odoo 17 - Odoo 17 Slides
 
How to install python packages from Pycharm
How to install python packages from PycharmHow to install python packages from Pycharm
How to install python packages from Pycharm
 
Q1_LE_Music and Arts 7_Lesson 1_Weeks 1-2.docx
Q1_LE_Music and Arts 7_Lesson 1_Weeks 1-2.docxQ1_LE_Music and Arts 7_Lesson 1_Weeks 1-2.docx
Q1_LE_Music and Arts 7_Lesson 1_Weeks 1-2.docx
 
How to Configure Field Cleaning Rules in Odoo 17
How to Configure Field Cleaning Rules in Odoo 17How to Configure Field Cleaning Rules in Odoo 17
How to Configure Field Cleaning Rules in Odoo 17
 
Float Operations in Odoo 17 - Odoo 17 Slides
Float Operations in Odoo 17 - Odoo 17 SlidesFloat Operations in Odoo 17 - Odoo 17 Slides
Float Operations in Odoo 17 - Odoo 17 Slides
 
How to Load Custom Field to POS in Odoo 17 - Odoo 17 Slides
How to Load Custom Field to POS in Odoo 17 - Odoo 17 SlidesHow to Load Custom Field to POS in Odoo 17 - Odoo 17 Slides
How to Load Custom Field to POS in Odoo 17 - Odoo 17 Slides
 
ACTION PLAN ON NUTRITION MONTH 2024.docx
ACTION PLAN ON NUTRITION MONTH 2024.docxACTION PLAN ON NUTRITION MONTH 2024.docx
ACTION PLAN ON NUTRITION MONTH 2024.docx
 
How to Set Start Category in Odoo 17 POS
How to Set Start Category in Odoo 17 POSHow to Set Start Category in Odoo 17 POS
How to Set Start Category in Odoo 17 POS
 
great athletes ppt bahasa inggris kelas x kurikulum merdeka
great athletes ppt bahasa inggris kelas x kurikulum merdekagreat athletes ppt bahasa inggris kelas x kurikulum merdeka
great athletes ppt bahasa inggris kelas x kurikulum merdeka
 
Brigada eskwela 2024 sample template NARRATIVE REPORT.docx
Brigada eskwela 2024 sample template NARRATIVE REPORT.docxBrigada eskwela 2024 sample template NARRATIVE REPORT.docx
Brigada eskwela 2024 sample template NARRATIVE REPORT.docx
 
principles of auditing types of audit ppt
principles of auditing types of audit pptprinciples of auditing types of audit ppt
principles of auditing types of audit ppt
 
How to Use Quality Module in Odoo 17 - Odoo 17 Slides
How to Use Quality Module in Odoo 17 - Odoo 17 SlidesHow to Use Quality Module in Odoo 17 - Odoo 17 Slides
How to Use Quality Module in Odoo 17 - Odoo 17 Slides
 
Module 5 Bone, Joints & Muscle Injuries.ppt
Module 5 Bone, Joints & Muscle Injuries.pptModule 5 Bone, Joints & Muscle Injuries.ppt
Module 5 Bone, Joints & Muscle Injuries.ppt
 
Class-Orientation for school year 2024 - 2025
Class-Orientation for school year 2024 - 2025Class-Orientation for school year 2024 - 2025
Class-Orientation for school year 2024 - 2025
 
What is the Use of API.onchange in Odoo 17
What is the Use of API.onchange in Odoo 17What is the Use of API.onchange in Odoo 17
What is the Use of API.onchange in Odoo 17
 
How to Configure Extra Steps During Checkout in Odoo 17 Website App
How to Configure Extra Steps During Checkout in Odoo 17 Website AppHow to Configure Extra Steps During Checkout in Odoo 17 Website App
How to Configure Extra Steps During Checkout in Odoo 17 Website App
 

Similarity Measures (pptx)

  • 2. Topics to be covered…  Introduction to clustering  Similarity and dissimilarity measures  Clustering techniques  Partitioning algorithms  Hierarchical algorithms  Density-based algorithm 2
  • 3. Introduction to Clustering 3  Classification consists of assigning a class label to a set of unclassified cases.  Supervised Classification  The set of possible classes is known in advance.  Unsupervised Classification  Set of possible classes is not known. After classification we can try to assign a name to that class.  Unsupervised classification is called clustering.
  • 6. Introduction to Clustering  Clustering is somewhat related to classification in the sense that in both cases data are grouped. •  However, there is a major difference between these two techniques.  In order to understand the difference between the two, consider a sample dataset containing marks obtained by a set of students and corresponding grades as shown in Table 15.1. 6
  • 7. Introduction to Clustering Table 12.1: Tabulation of Marks 7 Roll No Mark Grade 1 80 A 2 70 A 3 55 C 4 91 EX 5 65 B 6 35 D 7 76 A 8 40 D 9 50 C 10 85 EX 11 25 F 12 60 B 13 45 D 14 95 EX 15 63 B 16 88 A Figure 12.1: Group representation of dataset in Table 15.1 15 12 5 11 14 10 4 6 13 8 16 7 1 2 3 9 B F EX D C A
  • 8. Introduction to Clustering  It is evident that there is a simple mapping between Table 12.1 and Fig 12.1.  The fact is that groups in Fig 12.1 are already predefined in Table 12.1. This is similar to classification, where we have given a dataset where groups of data are predefined.  Consider another situation, where ‘Grade’ is not known, but we have to make a grouping.  Put all the marks into a group if any other mark in that group does not exceed by 5 or more.  This is similar to “Relative grading” concept and grade may range from A to Z. 8
  • 9. Introduction to Clustering  Figure 12.2 shows another grouping by means of another simple mapping, but the difference is this mapping does not based on predefined classes.  In other words, this grouping is accomplished by finding similarities between data according to characteristics found in the actual data.  Such a group making is called clustering.
  • 10. Introduction to Clustering Example 12.1 : The task of clustering In order to elaborate the clustering task, consider the following dataset. Table 12.2: Life Insurance database With certain similarity or likeliness defined, we can classify the records to one or group of more attributes (and thus mapping being non-trivial). 10 Martial Status Age Income Education Number of children Single 35 25000 Under Graduate 3 Married 25 15000 Graduate 1 Single 40 20000 Under Graduate 0 Divorced 20 30000 Post-Graduate 0 Divorced 25 20000 Under Graduate 3 Married 60 70000 Graduate 0 Married 30 90000 Post-Graduate 0 Married 45 60000 Graduate 5 Divorced 50 80000 Under Graduate 2
  • 11.  Clustering has been used in many application domains:  Image analysis  Document retrieval  Machine learning, etc.  When clustering is applied to real-world database, many problems may arise. 1. The (best) number of cluster is not known.  There is not correct answer to a clustering problem.  In fact, many answers may be found.  The exact number of cluster required is not easy to determine. 11 Introduction to Clustering
  • 12. 2. There may not be any a priori knowledge concerning the clusters. • This is an issue that what data should be used for clustering. • Unlike classification, in clustering, we have not supervisory learning to aid the process. • Clustering can be viewed as similar to unsupervised learning. 3. Interpreting the semantic meaning of each cluster may be difficult. • With classification, the labeling of classes is known ahead of time. In contrast, with clustering, this may not be the case. • Thus, when the clustering process is finished yielding a set of clusters, the exact meaning of each cluster may not be obvious. 12 Introduction to Clustering
  • 13. Definition of Clustering Problem 13 Given a database D = 𝑡1, 𝑡2, … . . , 𝑡𝑛 of 𝑛 tuples, the clustering problem is to define a mapping 𝑓 ∶ D 𝐶, where each 𝑡𝑖 ∈ 𝐷 is assigned to one cluster 𝑐𝑖 ∈ 𝐶. Here, C = 𝑐1, 𝑐2, … . . , 𝑐𝑘 denotes a set of clusters. Definition 12.1: Clustering • Solution to a clustering problem is devising a mapping formulation. • The formulation behind such a mapping is to establish that a tuple within one cluster is more like tuples within that cluster and not similar to tuples outside it.
  • 14. Definition of Clustering Problem 14 • Hence, mapping function f in Definition 12.1 may be explicitly stated as 𝑓 ∶ D 𝑐1, 𝑐2, … . . , 𝑐𝑘 where i) each 𝑡𝑖 ∈ 𝐷 is assigned to one cluster 𝑐𝑖 ∈ 𝐶. ii) for each cluster 𝑐𝑖 ∈ 𝐶, and for all 𝑡𝑖𝑝 , 𝑡𝑖𝑞 ∈ 𝑐𝑖 and there exist 𝑡𝑗 ∉ 𝑐𝑖 such that similarity (𝑡𝑖𝑝 , 𝑡𝑖𝑞 ) > similarity (𝑡𝑖𝑝 , 𝑡𝑗 ) AND similarity (𝑡𝑖𝑞 , 𝑡𝑗 ) • In the field of cluster analysis, this similarity plays an important part. • Now, we shall learn how similarity (this is also alternatively judged as “dissimilarity”) between any two data can be measured.
  • 16. Similarity and Dissimilarity Measures 16 • In clustering techniques, similarity (or dissimilarity) is an important measurement. • Informally, similarity between two objects (e.g., two images, two documents, two records, etc.) is a numerical measure of the degree to which two objects are alike. • The dissimilarity on the other hand, is another alternative (or opposite) measure of the degree to which two objects are different. • Both similarity and dissimilarity also termed as proximity. • Usually, similarity and dissimilarity are non-negative numbers and may range from zero (highly dissimilar (no similar)) to some finite/infinite value (highly similar (no dissimilar)). Note: • Frequently, the term distance is used as a synonym for dissimilarity • In fact, it is used to refer as a special case of dissimilarity.
  • 17. Proximity Measures: Single-Attribute 17 • Consider an object, which is defined by a single attribute A (e.g., length) and the attribute A has n-distinct values 𝑎1, 𝑎2, … . . , 𝑎𝑛. • A data structure called “Dissimilarity matrix” is used to store a collection of proximities that are available for all pair of n attribute values. • In other words, the Dissimilarity matrix for an attribute A with n values is represented by an 𝑛 × 𝑛 matrix as shown below. 0 𝑝(2,1) 0 𝑝(3,1) ⋮ 𝑝(𝑛,1) 𝑝(3,2) ⋮ 𝑝(𝑛,2) 0 ⋮ … … 0 𝑛×𝑛 • Here, 𝑝(𝑖,𝑗) denotes the proximity measure between two objects with attribute values 𝑎𝑖 and 𝑎𝑗. • Note: The proximity measure is symmetric, that is, 𝑝(𝑖,𝑗) = 𝑝(𝑗,𝑖)
  • 18. Proximity Calculation  Proximity calculation to compute 𝑝(𝑖,𝑗) is different for different types of attributes according to NOIR topology. Proximity calculation for Nominal attributes: • For example, binary attribute, Gender = {Male, female} where Male is equivalent to binary 1 and female is equivalent to binary 0. • Similarity value is 1 if the two objects contains the same attribute value, while similarity value is 0 implies objects are not at all similar. 18 Object Gender Ram Male Sita Female Laxman Male • Here, Similarity value let it be denoted by 𝑝, among different objects are as follows. 𝑝 𝑅𝑎𝑚, 𝑠𝑖𝑡𝑎 = 0 𝑝 𝑅𝑎𝑚, 𝐿𝑎𝑥𝑚𝑎𝑛 = 1 Note : In this case, if 𝑞 denotes the dissimilarity between two objects 𝑖 𝑎𝑛𝑑 𝑗 with single binary attributes, then 𝑞(𝑖,𝑗)= 1 − 𝑝(𝑖,𝑗)
  • 19. Proximity Calculation 19 • Now, let us focus on how to calculate proximity measures between objects which are defined by two or more binary attributes. • Suppose, the number of attributes be 𝑏. We can define the contingency table summarizing the different matches and mismatches between any two objects 𝑥 and 𝑦, which are as follows. Object 𝑥 Object y 1 0 1 𝑓11 𝑓10 0 𝑓01 𝑓00 Table 12.3: Contingency table with binary attributes Here, 𝑓11= the number of attributes where 𝑥=1 and 𝑦=1. 𝑓10= the number of attributes where 𝑥=1 and 𝑦=0. 𝑓01= the number of attributes where 𝑥=0 and 𝑦=1. 𝑓00= the number of attributes where 𝑥=0 and 𝑦=0. Note : 𝑓00 + 𝑓01 + 𝑓10 + 𝑓11 = 𝑏, the total number of binary attributes. Now, two cases may arise: symmetric and asymmetric binary attributes.
  • 20. Similarity Measure with Symmetric Binary 20 • To measure the similarity between two objects defined by symmetric binary attributes using a measure called symmetric binary coefficient and denoted as 𝒮 and defined below 𝒮 = 𝑁𝑢𝑚𝑏𝑒𝑟 𝑜𝑓 𝑚𝑎𝑡𝑐ℎ𝑖𝑛𝑔 𝑎𝑡𝑡𝑟𝑖𝑏𝑢𝑡𝑒 𝑣𝑎𝑙𝑢𝑒𝑠 𝑇𝑜𝑡𝑎𝑙 𝑛𝑢𝑚𝑏𝑒𝑟 𝑜𝑓 𝑎𝑡𝑡𝑟𝑖𝑏𝑢𝑡𝑒𝑠 or 𝒮 = 𝑓00 + 𝑓11 𝑓00 + 𝑓01 + 𝑓10 + 𝑓11 The dissimilarity measure, likewise can be denoted as 𝒟 and defined as 𝒟 = 𝑁𝑢𝑚𝑏𝑒𝑟 𝑜𝑓 𝑚𝑖𝑠𝑚𝑎𝑡𝑐ℎ𝑒𝑑 𝑎𝑡𝑡𝑟𝑖𝑏𝑢𝑡𝑒 𝑣𝑎𝑙𝑢𝑒𝑠 𝑇𝑜𝑡𝑎𝑙 𝑛𝑢𝑚𝑏𝑒𝑟 𝑜𝑓 𝑎𝑡𝑡𝑟𝑖𝑏𝑢𝑡𝑒𝑠 or 𝒟 = 𝑓01 + 𝑓10 𝑓00 + 𝑓01 + 𝑓10 + 𝑓11 Note that, 𝒟 = 1 − 𝒮
  • 21. Similarity Measure with Symmetric Binary 21 Example 12.2: Proximity measures with symmetric binary attributes Consider the following two dataset, where objects are defined with symmetric binary attributes. Gender = {M, F}, Food = {V, N}, Caste = {H, M}, Education = {L, I}, Hobby = {T, C}, Job = {Y, N} Object Gender Food Caste Education Hobby Job Hari M V M L C N Ram M N M I T N Tomi F N H L C Y 𝒮(Hari, Ram) = 1+2 1+2+1+2 = 0.5
  • 22. Proximity Measure with Asymmetric Binary 22 • Such a similarity measure between two objects defined by asymmetric binary attributes is done by Jaccard Coefficient and which is often symbolized by 𝒥 is given by the following equation 𝒥= 𝑁𝑢𝑚𝑏𝑒𝑟 𝑜𝑓 𝑚𝑎𝑡𝑐ℎ𝑖𝑛𝑔 𝑝𝑟𝑒𝑠𝑒𝑛𝑐𝑒 𝑁𝑢𝑚𝑏𝑒𝑟 𝑜𝑓 𝑎𝑡𝑡𝑟𝑖𝑏𝑢𝑡𝑒𝑠 𝑛𝑜𝑡 𝑖𝑛𝑣𝑜𝑙𝑣𝑒𝑑 𝑖𝑛 00 𝑚𝑎𝑡𝑐ℎ𝑖𝑛𝑔 or 𝒥 = 𝑓11 𝑓01 + 𝑓10 + 𝑓11
  • 23. 23 Example 12.3: Jaccard Coefficient Consider the following two dataset. Gender = {M, F}, Food = {V, N}, Caste = {H, M}, Education = {L, I}, Hobby = {T, C}, Job = {Y, N} Calculate the Jaccard coefficient between Ram and Hari assuming that all binary attributes are asymmetric and for each pair values for an attribute, first one is more frequent than the second. Object Gender Food Caste Education Hobby Job Hari M V M L C N Ram M N M I T N Tomi F N H L C Y 𝒥(Hari, Ram) = 1 2+1+1 = 0.25 Note: 𝒥(Ram, Tomi) = 0 and 𝒥(Hari, Ram) = 𝒥(Ram, Hari), etc. Proximity Measure with Asymmetric Binary
  • 24. 24 Example 12.4: Consider the following two dataset. Gender = {M, F}, Food = {V, N}, Caste = {H, M}, Education = {L, I}, Hobby = {T, C}, Job = {Y, N} Object Gender Food Caste Education Hobby Job Hari M V M L C N Ram M N M I T N Tomi F N H L C Y How you can calculate similarity if Gender, Hobby and Job are symmetric binary attributes and Food, Caste, Education are asymmetric binary attributes? Obtain the similarity matrix with Jaccard coefficient of objects for the above, e.g. ?
  • 25. 25 • Binary attribute is a special kind of nominal attribute where the attribute has values with two states only. • On the other hand, categorical attribute is another kind of nominal attribute where it has values with three or more states (e.g. color = {Red, Green, Blue}). • If 𝓈 𝑥, 𝑦 denotes the similarity between two objects 𝑥 𝑎𝑛𝑑 𝑦, then 𝓈 𝑥, 𝑦 = 𝑁𝑢𝑚𝑏𝑒𝑟 𝑜𝑓 𝑚𝑎𝑡𝑐ℎ𝑒𝑠 𝑇𝑜𝑡𝑎𝑙 𝑛𝑢𝑚𝑏𝑒𝑟 𝑜𝑓 𝑎𝑡𝑡𝑟𝑖𝑏𝑢𝑡𝑒𝑠 and the dissimilarity 𝒹(𝑥, 𝑦) is 𝒹(𝑥, 𝑦) = 𝑁𝑢𝑚𝑏𝑒𝑟 𝑜𝑓 𝑚𝑖𝑠𝑚𝑎𝑡𝑐ℎ𝑒𝑠 𝑇𝑜𝑡𝑎𝑙 𝑛𝑢𝑚𝑏𝑒𝑟 𝑜𝑓 𝑎𝑡𝑡𝑟𝑖𝑏𝑢𝑡𝑒𝑠 • If 𝑚 = number of matches and 𝑎 = number of categorical attributes with which objects are defined as 𝓈 𝑥, 𝑦 = 𝑚 𝑎 and 𝒹 𝑥, 𝑦 = 𝑎−𝑚 𝑎 Proximity Measure with Categorical Attribute
Proximity Measure with Categorical Attribute
26
Example 12.5:

Object  Color  Position  Distance
1       R      L         L
2       B      C         M
3       G      R         M
4       R      L         H

The similarity matrix considering only the Color attribute is shown below (entry (𝑖, 𝑗) is 1 if objects 𝑖 and 𝑗 have the same color, and 0 otherwise).

     1  2  3  4
1    1  0  0  1
2    0  1  0  0
3    0  0  1  0
4    1  0  0  1

Dissimilarity matrix, 𝒹 = ?
Obtain the dissimilarity matrix considering both of the categorical attributes (i.e. Color and Position).
Proximity Measure with Ordinal Attribute
27
• An ordinal attribute is a special kind of categorical attribute, where the values of the attribute follow a sequence (ordering), e.g. Grade = {Ex, A, B, C} where Ex > A > B > C.
• Suppose 𝐴 is an attribute of type ordinal and the set of values of 𝐴 = {𝑎1, 𝑎2, … , 𝑎𝑛}. Let the 𝑛 values of 𝐴 be ordered in ascending order as 𝑎1 < 𝑎2 < … < 𝑎𝑛, and let the i-th attribute value 𝑎𝑖 be ranked i, for i = 1, 2, …, n.
• The normalized value of 𝑎𝑖 can be expressed as

𝑎𝑖 = (𝑖 − 1) / (𝑛 − 1)

• Thus, normalized values lie in the range [0..1].
• As 𝑎𝑖 is now a numerical value, proximity can then be calculated using any measurement method for numerical attributes.
• For example, the dissimilarity between two objects 𝑥 and 𝑦 with attribute values 𝑎𝑖 and 𝑎𝑗 can be expressed as

𝒹(𝑥, 𝑦) = (𝑎𝑖 − 𝑎𝑗)²

where 𝑎𝑖 and 𝑎𝑗 are the normalized values, and the similarity can then be taken as 𝓈(𝑥, 𝑦) = 1 − 𝒹(𝑥, 𝑦).
Proximity Measure with Ordinal Attribute
28
Example 12.6:
Consider the following set of records, where each record is defined by two ordinal attributes, Size = {S, M, L} and Quality = {Ex, A, B, C}, such that S < M < L and Ex > A > B > C.

Object  Size      Quality
A       S (0.0)   A (0.66)
B       L (1.0)   Ex (1.0)
C       L (1.0)   C (0.0)
D       M (0.5)   B (0.33)

• Normalized values are shown in brackets.
• Fill in the similarity measures in the (A, B, C, D) × (A, B, C, D) similarity matrix.
? Find the dissimilarity matrix when each object is defined by only one ordinal attribute, say Size (or Quality).
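A minimal sketch of the rank normalization 𝑎𝑖 = (𝑖 − 1)/(𝑛 − 1), reproducing the bracketed values of the table above; the function name is ours.

def normalize_ordinal(value, levels):
    """Map a value with 1-based rank i among n ordered levels to (i - 1)/(n - 1)."""
    i = levels.index(value) + 1          # 1-based rank of the value
    return (i - 1) / (len(levels) - 1)

size = ["S", "M", "L"]                   # S < M < L
quality = ["C", "B", "A", "Ex"]          # C < B < A < Ex
print(normalize_ordinal("M", size))      # 0.5
print(normalize_ordinal("A", quality))   # 0.666..., shown as 0.66 in the table
print(normalize_ordinal("Ex", quality))  # 1.0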
Proximity Measure with Interval Scale
29
• A measure called distance is usually used to estimate the proximity between two objects defined with interval-scaled attributes.
• We first present a generic formula for the distance d between two objects 𝑥 and 𝑦 in n-dimensional space. Suppose 𝑥𝑖 and 𝑦𝑖 denote the values of the i-th attribute of the objects 𝑥 and 𝑦, respectively. Then

𝑑(𝑥, 𝑦) = ( Σᵢ₌₁ⁿ |𝑥𝑖 − 𝑦𝑖|ʳ )^(1/𝑟)

• Here, 𝑟 is a parameter (usually 𝑟 ≥ 1).
• This distance measure is most popularly known as the Minkowski metric.
• This distance measure satisfies some well-known properties, mentioned in the next slide.
Proximity Measure with Interval Scale
30
Properties of Minkowski metrics:
1. Non-negativity:
   a. 𝑑(𝑥, 𝑦) ≥ 0 for all 𝑥 and 𝑦.
   b. 𝑑(𝑥, 𝑦) = 0 only if 𝑥 = 𝑦. This is also called the identity condition.
2. Symmetry:
   𝑑(𝑥, 𝑦) = 𝑑(𝑦, 𝑥) for all 𝑥 and 𝑦.
   This condition ensures that the order in which objects are considered is not important.
3. Triangle inequality (sometimes loosely called transitivity):
   𝑑(𝑥, 𝑧) ≤ 𝑑(𝑥, 𝑦) + 𝑑(𝑦, 𝑧) for all 𝑥, 𝑦 and 𝑧.
   • This condition has the interpretation that the direct distance 𝑑(𝑥, 𝑧) between objects 𝑥 and 𝑧 never exceeds the sum of the distances between 𝑥 and 𝑦 and between 𝑦 and 𝑧.
Proximity Measure with Interval Scale
31
Depending on the value of 𝑟, the distance measure is named accordingly.
1. Manhattan distance (L1 norm: 𝒓 = 1)
The Manhattan distance is expressed as

𝑑 = Σᵢ₌₁ⁿ |𝑥𝑖 − 𝑦𝑖|

where |…| denotes the absolute value. This metric is also alternatively termed the taxicab metric or city-block metric.

Example: x = [7, 3, 5] and y = [3, 2, 6]. The Manhattan distance is |7 − 3| + |3 − 2| + |5 − 6| = 6.

• As a special instance, the Manhattan distance restricted to attribute values in {0, 1} is called the Hamming distance.
• In other words, the Hamming distance is the number of bits that differ between two objects that have only binary values (i.e. between two binary vectors).
Proximity Measure with Interval Scale
32
2. Euclidean distance (L2 norm: 𝒓 = 2)
This metric is the usual Euclidean distance between any two points 𝑥 and 𝑦 in ℛⁿ:

𝑑(𝑥, 𝑦) = √( Σᵢ₌₁ⁿ (𝑥𝑖 − 𝑦𝑖)² )

Example: x = [7, 3, 5] and y = [3, 2, 6]. The Euclidean distance between 𝑥 and 𝑦 is

𝑑(𝑥, 𝑦) = √((7 − 3)² + (3 − 2)² + (5 − 6)²) = √18 ≈ 4.243
Proximity Measure with Interval Scale
33
3. Chebychev distance (L∞ norm: 𝒓 → ∞)
This metric is defined as

𝑑(𝑥, 𝑦) = max∀𝑖 |𝑥𝑖 − 𝑦𝑖|

• Note the difference between the Chebychev metric and the Manhattan distance: instead of summing up the absolute differences (as in the Manhattan distance), we simply take the maximum of the absolute differences (in the Chebychev distance). Hence, L∞ ≤ L1.

Example: x = [7, 3, 5] and y = [3, 2, 6].
The Manhattan distance = |7 − 3| + |3 − 2| + |5 − 6| = 6.
The Chebychev distance = max{|7 − 3|, |3 − 2|, |5 − 6|} = 4.
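A minimal sketch of the Minkowski family that checks the worked numbers above (r = 1, r = 2, and the r → ∞ limit):

def minkowski(x, y, r):
    """L_r distance: (sum_i |x_i - y_i|^r)^(1/r)."""
    return sum(abs(a - b) ** r for a, b in zip(x, y)) ** (1.0 / r)

def chebychev(x, y):
    """L_inf distance: max_i |x_i - y_i|, the limit of L_r as r grows."""
    return max(abs(a - b) for a, b in zip(x, y))

x, y = [7, 3, 5], [3, 2, 6]
print(minkowski(x, y, 1))   # 6.0 (Manhattan)
print(minkowski(x, y, 2))   # 4.242... = sqrt(18) (Euclidean)
print(chebychev(x, y))      # 4 (Chebychev); note L_inf <= L_1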
Proximity Measure with Interval Scale
34
4. Other metrics:
a. Canberra metric:

𝑑(𝑥, 𝑦) = Σᵢ₌₁ⁿ ( |𝑥𝑖 − 𝑦𝑖| / (𝑥𝑖 + 𝑦𝑖) )^𝑞

• where q is a real number. Usually q = 1; because the numerator of each ratio is always ≤ its denominator, each ratio is ≤ 1, so the sum is always bounded and small.
• If q < 1, it is called the fractional Canberra metric.
• If q > 1, the opposite relationship holds.

b. Hellinger metric:

𝑑(𝑥, 𝑦) = √( Σᵢ₌₁ⁿ (√𝑥𝑖 − √𝑦𝑖)² )

Alternatively, the correlation coefficient 𝑟(𝑥, 𝑦) between 𝑥 and 𝑦, which lies in the range [−1, +1], can be transformed into a distance, for example using:
i. 𝑑(𝑥, 𝑦) = (1 − 𝑟(𝑥, 𝑦)) / 2
ii. 𝑑(𝑥, 𝑦) = 1 − 𝑟(𝑥, 𝑦)

Note: a dissimilarity measure is not necessarily a distance metric; it may violate one or more of the metric properties.
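A minimal sketch of the Canberra metric with q = 1; skipping positions where both values are 0 is a common convention we adopt here, and the input vectors are illustrative.

def canberra(x, y, q=1.0):
    """Canberra metric: sum_i (|x_i - y_i| / (x_i + y_i))^q, for non-negative data."""
    total = 0.0
    for a, b in zip(x, y):
        if a + b != 0:                      # skip 0/0 terms by convention
            total += (abs(a - b) / (a + b)) ** q
    return total

print(canberra([7, 3, 5], [3, 2, 6]))       # 0.4 + 0.2 + 0.0909... ≈ 0.691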
Proximity Measure for Ratio-Scale
35
The proximity between objects with ratio-scaled variables can be computed with the following steps:
1. Apply an appropriate transformation to the data to bring it into a linear scale (e.g. a logarithmic transformation to data of the form 𝑋 = 𝐴𝑒^(𝐵𝑡)).
2. The transformed values can be treated as interval-scaled values. Any distance measure discussed for interval-scaled variables can then be applied to measure the similarity.

Note: there are two concerns with proximity measures:
• Normalization of the measured values.
• Transformation between similarity and dissimilarity measures.
Proximity Measure for Ratio-Scale
36
Normalization:
• A major problem when using similarity (or dissimilarity) measures (such as Euclidean distance) is that large-valued attributes frequently swamp small-valued ones.
• For example, consider data with attributes Cost 1, Cost 2 and Cost 3, where the values of Cost 1 are orders of magnitude larger than the others. The contribution of Cost 2 and Cost 3 is then insignificant compared to Cost 1 as far as the Euclidean distance is concerned.
• This problem can be avoided if we consider the normalized values of all numerical attributes.
• Another normalization is to map the measured values into a normalized range, say [0, 1]. Note that if a measure 𝓈 varies in the range [0..∞), it can be normalized as 𝓈′ = 1/(1 + 𝓈).
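A minimal sketch of min-max normalization to [0, 1], one common way to stop a large-scale attribute from swamping the others; the column values are illustrative.

def min_max_normalize(values):
    """Rescale a numeric attribute linearly to the range [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

cost1 = [25000, 15000, 20000, 30000]   # large-scale attribute
cost2 = [3, 1, 0, 5]                   # small-scale attribute
print(min_max_normalize(cost1))        # both attributes now vary in [0, 1],
print(min_max_normalize(cost2))        # so both contribute comparably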
Proximity Measure for Ratio-Scale
37
Transformation between similarity and dissimilarity:
• Transforming similarities to dissimilarities and vice-versa is relatively straightforward.
• If the similarity (or dissimilarity) falls in the interval [0..1], the dissimilarity (or similarity) can be obtained as

𝑑 = 1 − 𝓈 or 𝓈 = 1 − 𝑑

• Another approach is to define the similarity as the negative of the dissimilarity (or vice-versa).
Proximity Measure with Mixed Attributes
38
• The previous similarity measures all assume that the attributes are of the same type. Thus, a general approach is needed when the attributes are of different types.
• One straightforward approach is to compute the similarity between each attribute separately and then combine these attribute similarities using a method that results in an overall similarity between 0 and 1.
• Typically, the overall similarity is defined as the average of all the individual attribute similarities.
• See the procedure in the next slide for doing this.
Proximity Measure with Mixed Attributes
39
Suppose the objects are defined with attributes 𝐴1, 𝐴2, … , 𝐴𝑛.
1. For the k-th attribute (k = 1, 2, …, n), compute a similarity 𝓈ₖ(𝑥, 𝑦) in the range [0, 1].
2. Compute the overall similarity between the two objects using the formula

similarity(𝑥, 𝑦) = ( Σₖ₌₁ⁿ 𝓈ₖ(𝑥, 𝑦) ) / 𝑛

3. The above formula can be modified by weighting the contribution of each attribute. If 𝑤ₖ is the weight for the k-th attribute, with Σₖ₌₁ⁿ 𝑤ₖ = 1, then

w_similarity(𝑥, 𝑦) = Σₖ₌₁ⁿ 𝑤ₖ 𝓈ₖ(𝑥, 𝑦)

(note that the unweighted formula in step 2 is the special case 𝑤ₖ = 1/𝑛).
4. The definition of the Minkowski distance can also be modified in the same spirit:

𝑑(𝑥, 𝑦) = ( Σₖ₌₁ⁿ 𝑤ₖ |𝑥ₖ − 𝑦ₖ|ʳ )^(1/𝑟)

All symbols have their usual meanings.
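A minimal sketch of steps 2 and 3; the per-attribute similarities and weights below are hypothetical placeholders.

def overall_similarity(sims, weights=None):
    """Combine per-attribute similarities s_k in [0, 1] into one overall score."""
    n = len(sims)
    if weights is None:
        weights = [1.0 / n] * n        # unweighted average as a special case
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(w * s for w, s in zip(weights, sims))

sims = [1.0, 0.0, 0.66, 0.9]           # hypothetical s_k values for 4 attributes
print(overall_similarity(sims))                          # plain average
print(overall_similarity(sims, [0.4, 0.2, 0.2, 0.2]))    # weighted version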
Similarity Measure with Mixed Attributes
40
Example 12.7: Consider the following set of objects. Obtain the similarity matrix. [For C: X > A > B > C]

Object  A (Binary)  B (Categorical)  C (Ordinal)  D (Numeric)  E (Numeric)
1       Y           R                X            475          10⁸
2       N           R                A            10           10⁻²
3       N           B                C            1000         10⁵
4       Y           G                B            500          10³
5       Y           B                A            80           1

How can cosine similarity be applied to this?
Non-Metric Similarity
41
• In many applications (such as information retrieval), objects are complex and contain a large number of symbolic entities (such as keywords, phrases, etc.).
• To measure the distance between complex objects, it is often desirable to introduce a non-metric similarity function.
• Here, we discuss a few such non-metric similarity measures.

Cosine similarity
Suppose x and y denote two vectors representing two complex objects. The cosine similarity, denoted cos(𝑥, 𝑦), is defined as

cos(𝑥, 𝑦) = (𝑥 ⋅ 𝑦) / (‖𝑥‖ ⋅ ‖𝑦‖)

• where 𝑥 ⋅ 𝑦 denotes the vector dot product, namely 𝑥 ⋅ 𝑦 = Σᵢ₌₁ⁿ 𝑥𝑖 𝑦𝑖, with 𝑥 = [𝑥1, 𝑥2, …, 𝑥𝑛] and 𝑦 = [𝑦1, 𝑦2, …, 𝑦𝑛].
• ‖𝑥‖ and ‖𝑦‖ denote the Euclidean norms of the vectors x and y, respectively (essentially their lengths), that is,

‖𝑥‖ = √(𝑥1² + 𝑥2² + … + 𝑥𝑛²) and ‖𝑦‖ = √(𝑦1² + 𝑦2² + … + 𝑦𝑛²)
Cosine Similarity
42
• In fact, cosine similarity is essentially a measure of the (cosine of the) angle between x and y.
• Thus, if the cosine similarity is 1, the angle between x and y is 0°, and in this case x and y are the same except for magnitude.
• On the other hand, if the cosine similarity is 0, the angle between x and y is 90°, and they do not share any terms.
• Accordingly, the cosine similarity can be written equivalently as

cos(𝑥, 𝑦) = (𝑥 ⋅ 𝑦) / (‖𝑥‖ ⋅ ‖𝑦‖) = (𝑥/‖𝑥‖) ⋅ (𝑦/‖𝑦‖) = 𝑥′ ⋅ 𝑦′

where 𝑥′ = 𝑥/‖𝑥‖ and 𝑦′ = 𝑦/‖𝑦‖. This means that cosine similarity does not take the magnitudes of the two vectors into account when computing similarity.
• It is thus a magnitude-normalized measure.
  • 43. Non-Metric Similarity 43 Example 12.7: Cosine Similarity Suppose, we are given two documents with count of 10 words in each are shown in the form of vectors x and y as below. x = [3, 2, 0, 5, 0, 0, 0, 2, 0, 0] and y = [1, 0, 0, 0, 0, 0, 0, 1, 0, 2] Thus, 𝑥 ⋅ 𝑦 = 3*1 + 2*0 + 0*0 + 5*0 + 0*0 + 0*0 + 0*0 + 2*1 + 0*0 + 0*2 = 5 𝑥 = 32 + 22 + 0 + 52 + 0 + 0 + 0 + 22 + 0 + 0 = 6.48 𝑦 = 12 + 0 + 0 + 0 + 0 + 0 + 0 + 12 + 0 + 22 = 2.24 ∴ cos 𝑥, 𝑦 = 0.31 Extended Jaccard Coefficient The extended Jaccard coefficient is denoted as 𝐸𝐽 and defined as 𝐸𝐽 = 𝑥 ⋅ 𝑦 ‖x‖2 ⋅ 𝑦 2 − 𝑥 ⋅ 𝑦 • This is also alternatively termed as Tanimoto coefficient and can be used to measure like document similarity. Compute Extended Jaccard coefficient (𝐸𝐽) for the above example 12.7.
Pearson's Correlation
44
• The correlation between two objects x and y gives a measure of the linear relationship between the attributes of the objects.
• More precisely, Pearson's correlation coefficient between two objects x and y is defined as follows.

𝑃(𝑥, 𝑦) = 𝑆𝑥𝑦 / (𝑆𝑥 ⋅ 𝑆𝑦)

where
𝑆𝑥𝑦 = covariance(𝑥, 𝑦) = (1/(𝑛 − 1)) Σᵢ₌₁ⁿ (𝑥𝑖 − x̄)(𝑦𝑖 − ȳ)
𝑆𝑥 = standard deviation of 𝑥 = √( (1/(𝑛 − 1)) Σᵢ₌₁ⁿ (𝑥𝑖 − x̄)² )
𝑆𝑦 = standard deviation of 𝑦 = √( (1/(𝑛 − 1)) Σᵢ₌₁ⁿ (𝑦𝑖 − ȳ)² )
x̄ = mean(𝑥) = (1/𝑛) Σᵢ₌₁ⁿ 𝑥𝑖
ȳ = mean(𝑦) = (1/𝑛) Σᵢ₌₁ⁿ 𝑦𝑖
and n is the number of attributes in x and y.
45
Note 1: Correlation always lies in the range −1 to 1. A correlation of 1 (−1) means that x and y have a perfect positive (negative) linear relationship, that is, 𝑥𝑖 = 𝑎 ⋅ 𝑦𝑖 + 𝑏 for some a and b.

Example 12.9: Pearson's correlation
Calculate Pearson's correlation of the two vectors x and y given below.
x = [3, 6, 0, 3, 6]
y = [1, 2, 0, 1, 2]
Note: vector components can be negative values as well.

Note: if the correlation is 0, then there is no linear relationship between the attributes of the objects.

Example 12.10: Non-linear correlation
Verify that there is no linear relationship between the attributes of the objects x and y given below.
x = [-3, -2, -1, 0, 1, 2, 3]
y = [9, 4, 1, 0, 1, 4, 9]
P(x, y) = 0; also note that 𝑦𝑖 = 𝑥𝑖² for all attributes here, so the relationship is non-linear rather than absent.
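A minimal sketch that verifies both examples, using the (n − 1) divisor for the covariance and standard deviations as in the definition:

def pearson(x, y):
    """P(x, y) = S_xy / (S_x * S_y) with (n - 1) divisors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)
    sx = (sum((a - mx) ** 2 for a in x) / (n - 1)) ** 0.5
    sy = (sum((b - my) ** 2 for b in y) / (n - 1)) ** 0.5
    return sxy / (sx * sy)

print(pearson([3, 6, 0, 3, 6], [1, 2, 0, 1, 2]))                 # 1.0 (x = 3y)
print(pearson([-3, -2, -1, 0, 1, 2, 3], [9, 4, 1, 0, 1, 4, 9]))  # 0.0 (y = x^2)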
Mahalanobis Distance
46
• A related issue with distance measurement is how to handle the situation when attributes do not have the same range of values.
• For example, consider a record with two attributes, Age and Income. Here, the two attributes have very different scales, so the Euclidean distance is not a suitable measure.
• A further question is how to compute distance when there is correlation between some of the attributes, perhaps in addition to the difference in the ranges of values.
• A generalization of Euclidean distance, the Mahalanobis distance, is useful when attributes are (partially) correlated and/or have different ranges of values.
• The Mahalanobis distance between two objects (vectors) x and y is defined as

𝑀(𝑥, 𝑦) = (𝑥 − 𝑦) Σ⁻¹ (𝑥 − 𝑦)ᵀ

Here, Σ⁻¹ is the inverse of the covariance matrix of the data.
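A minimal NumPy sketch of the formula above; the small Age/Income sample used to estimate the covariance matrix is illustrative.

import numpy as np

def mahalanobis(x, y, cov):
    """M(x, y) = (x - y) Sigma^{-1} (x - y)^T, as defined above."""
    diff = np.asarray(x, float) - np.asarray(y, float)
    return float(diff @ np.linalg.inv(cov) @ diff)

# Illustrative Age/Income sample; rows are objects, columns are attributes.
data = np.array([[35, 25000], [25, 15000], [40, 20000], [20, 30000]], float)
cov = np.cov(data, rowvar=False)   # 2 x 2 covariance matrix
print(mahalanobis(data[0], data[1], cov))
# The classical Mahalanobis *distance* takes the square root of this value.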
Set Difference and Time Difference
47
Set difference
• Another non-metric dissimilarity measure is based on the set difference.
• Given two sets A and B, A − B is the set of elements of A that are not in B. Thus, if A = {1, 2, 3, 4} and B = {2, 3, 4}, then A − B = {1} and B − A = ∅.
• We can define the distance d between two sets A and B as

𝑑(𝐴, 𝐵) = |𝐴 − 𝐵|

where |𝐴| denotes the size of set A.
Note: this measure violates the identity condition (𝑑(𝐴, 𝐵) = 0 whenever 𝐴 ⊆ 𝐵, even if 𝐴 ≠ 𝐵) and symmetry.
• The following modified definition, however, satisfies the metric properties:

𝑑(𝐴, 𝐵) = |𝐴 − 𝐵| + |𝐵 − 𝐴|

Time difference
• It defines the distance between two times of the day as follows:

𝑑(𝑡1, 𝑡2) = 𝑡2 − 𝑡1 if 𝑡1 ≤ 𝑡2
𝑑(𝑡1, 𝑡2) = 24 + (𝑡2 − 𝑡1) if 𝑡1 > 𝑡2

• Example: 𝑑(1 pm, 2 pm) = 1 hour; 𝑑(2 pm, 1 pm) = 23 hours.
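A minimal sketch of the time-of-day distance, using 24-hour clock values; the modulo form is equivalent to the case analysis above.

def time_distance(t1, t2):
    """Hours from t1 forward to t2 on a 24-hour clock."""
    return (t2 - t1) % 24

print(time_distance(13, 14))   # d(1 pm, 2 pm) = 1 hour
print(time_distance(14, 13))   # d(2 pm, 1 pm) = 23 hours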