Machine Learning: Dr Ajey S.N.R
Dr Ajey S.N.R.
Professor & Chairperson
Department of ECE, EC Campus
MACHINE LEARNING
Supervised Non-Parametric Methods of Machine Learning
Unit 3
Dr Ajey S.N.R
Department of Electronics and Communication Engineering
Non-Parametric Supervised Learning Methods
‘h’ is the length of the interval, and the instances $\{x^t\}$ that fall in this interval are assumed to be “close enough.”
Any probability density integrates to one:
$$\int_{-\infty}^{\infty} p(x)\,dx = 1$$
For example, the Gaussian density is
$$p(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\exp\!\left[-\frac{(x-m)^2}{2\sigma^2}\right]$$
1. The probability that $\mathbf{x}$ is inside a region $\mathcal{R}$:
$$P = \int_{\mathcal{R}} p(\mathbf{x})\,d\mathbf{x}, \qquad \text{with } \int_{-\infty}^{\infty} p(\mathbf{x})\,d\mathbf{x} = 1$$
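As a quick numerical check of these two facts, here is a minimal sketch (assuming numpy; the grid bounds and the region $[a, b]$ are illustrative) that approximates both integrals with a Riemann sum:

```python
import numpy as np

def gaussian_pdf(x, m=0.0, sigma=1.0):
    """p(x) = 1/(sqrt(2*pi)*sigma) * exp(-(x - m)^2 / (2*sigma^2))"""
    return np.exp(-((x - m) ** 2) / (2 * sigma ** 2)) / (np.sqrt(2 * np.pi) * sigma)

# Riemann-sum approximation of the integral of p(x) over a wide grid.
x, dx = np.linspace(-10, 10, 200001, retstep=True)
p = gaussian_pdf(x)
print(np.sum(p) * dx)             # ~1.0: total probability mass

# P: probability that x falls inside the region R = [a, b].
a, b = -1.0, 1.0
inside = (x >= a) & (x <= b)
print(np.sum(p[inside]) * dx)     # ~0.683 for R = one sigma around the mean
```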
Density estimation
Introduce the window (kernel) function
$$\varphi\!\left(\frac{x^i - x}{h}\right) = \begin{cases} 1, & \dfrac{\left|x^i_k - x_k\right|}{h} \le \dfrac{1}{2},\ k = 1, 2 \\ 0, & \text{otherwise} \end{cases}$$
Consider that $\mathcal{R}$ is a hypercube centered at x (think about a 2-D square).
Let h be the length of the edge of the hypercube; then $V = h^2$ for a 2-D square and $V = h^3$ for a 3-D cube.
Parzen windows: density estimate
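Combining the kernel $\varphi$ with the volume $V = h^d$, the standard Parzen-window estimate counts the samples falling inside the hypercube around x:
$$\hat{p}(x) = \frac{1}{N h^d} \sum_{t=1}^{N} \varphi\!\left(\frac{x - x^t}{h}\right)$$
A minimal sketch of this estimate (assuming numpy; function names and the test data are illustrative):

```python
import numpy as np

def hypercube_kernel(u):
    """phi(u) = 1 if |u_k| <= 1/2 for every coordinate k, else 0."""
    return np.all(np.abs(u) <= 0.5, axis=-1).astype(float)

def parzen_estimate(x, samples, h):
    """Estimate p(x) as (1 / (N * h^d)) * sum_t phi((x - x_t) / h)."""
    N, d = samples.shape
    count = hypercube_kernel((x - samples) / h).sum()
    return count / (N * h ** d)

# Usage: estimate the density of 2-D Gaussian samples at the origin.
rng = np.random.default_rng(0)
samples = rng.standard_normal((1000, 2))
print(parzen_estimate(np.zeros(2), samples, h=0.5))  # ~1/(2*pi) = 0.159
```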
Examples
[Figure slides: examples of Parzen-window density estimation]
Decision trees
Oval nodes are decision nodes and rectangles are leaf nodes. A univariate decision node splits along one axis, so successive splits are orthogonal to each other.
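To make this concrete, here is a small illustrative sketch (names are hypothetical) of a univariate tree: each decision node tests one attribute against a threshold, producing exactly the axis-aligned splits described above.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    """A univariate tree node: decision nodes split on one attribute,
    leaf nodes store an output (class label or regression value)."""
    feature: Optional[int] = None      # index of the attribute tested (decision node)
    threshold: Optional[float] = None  # go left if x[feature] <= threshold
    left: Optional["Node"] = None
    right: Optional["Node"] = None
    value: Optional[float] = None      # output stored at a leaf

def predict(node, x):
    """Follow axis-aligned splits from the root until a leaf is reached."""
    while node.value is None:
        node = node.left if x[node.feature] <= node.threshold else node.right
    return node.value

# Usage: a depth-1 tree splitting on attribute 0 at threshold 2.0.
root = Node(feature=0, threshold=2.0,
            left=Node(value=0.0), right=Node(value=1.0))
print(predict(root, [1.5]), predict(root, [3.0]))  # 0.0 1.0
```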
The impurity of node m is measured by the entropy, where $p_{mi}$ is the proportion of instances of class $i$ reaching node m and K is the number of classes:
$$\mathcal{I}_m = -\sum_{i=1}^{K} p_{mi} \log_2 p_{mi}$$
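A short sketch of this impurity measure and a split search over one numeric attribute (assuming numpy; helper names are illustrative), picking the threshold that minimizes the weighted entropy of the two children:

```python
import numpy as np

def entropy(labels):
    """I_m = -sum_i p_mi * log2(p_mi), with p_mi the class proportions at the node."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def best_split(x, labels):
    """Try midpoints between consecutive sorted values of one numeric attribute;
    return the threshold minimizing the weighted entropy of the two children."""
    order = np.argsort(x)
    x, labels = x[order], labels[order]
    best_t, best_i = None, np.inf
    for t in (x[:-1] + x[1:]) / 2:
        left, right = labels[x <= t], labels[x > t]
        if len(left) == 0 or len(right) == 0:    # skip degenerate splits (tied values)
            continue
        i = (len(left) * entropy(left) + len(right) * entropy(right)) / len(labels)
        if i < best_i:
            best_t, best_i = t, i
    return best_t, best_i

# Usage: one numeric attribute, two classes.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([0, 0, 1, 1])
print(best_split(x, y))  # threshold 2.5 gives two pure children (weighted impurity ~0)
```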
Regression Trees
• Error at node m:
$$b_m(\mathbf{x}) = \begin{cases} 1 & \text{if } \mathbf{x} \in X_m: \mathbf{x} \text{ reaches node } m \\ 0 & \text{otherwise} \end{cases}$$
$$E_m = \frac{1}{N_m} \sum_t \left(r^t - g_m\right)^2 b_m(\mathbf{x}^t), \qquad g_m = \frac{\sum_t b_m(\mathbf{x}^t)\, r^t}{\sum_t b_m(\mathbf{x}^t)}$$
• After splitting:
$$E'_m = \frac{1}{N_m} \sum_j \sum_t \left(r^t - g_{mj}\right)^2 b_{mj}(\mathbf{x}^t), \qquad g_{mj} = \frac{\sum_t b_{mj}(\mathbf{x}^t)\, r^t}{\sum_t b_{mj}(\mathbf{x}^t)}$$
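The sketch below (illustrative helper names, assuming numpy) computes these quantities on the instances reaching a node: $g_m$ as the mean output, $E_m$ as the node's mean squared error, and $E'_m$ as the weighted error after a candidate binary split.

```python
import numpy as np

def node_error(r):
    """E_m = (1/N_m) * sum_t (r_t - g_m)^2, with g_m the mean output at the node."""
    g_m = r.mean()
    return np.mean((r - g_m) ** 2)

def split_error(x, r, threshold):
    """E'_m: weighted error of the two children produced by the split x <= threshold."""
    left, right = r[x <= threshold], r[x > threshold]
    return (len(left) * node_error(left) + len(right) * node_error(right)) / len(r)

# Usage: compare the error before and after a candidate split.
x = np.array([1.0, 2.0, 3.0, 4.0])
r = np.array([1.1, 0.9, 3.0, 3.2])
print(node_error(r), split_error(x, r, threshold=2.5))  # error drops after splitting
```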
If the error at a node is acceptable, that is, $E_m < \theta_r$, a leaf node is created and splitting stops; using this rule, we can guarantee that the error for any instance is never larger than the given threshold $\theta_r$.
The acceptable error threshold is the complexity
parameter; when it is small, we generate large trees and
risk overfitting; when it is large, we underfit and smooth
too much.
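A compact recursive sketch of this stopping rule (reusing node_error and split_error from the previous sketch; everything else is illustrative): a node becomes a leaf storing $g_m$ as soon as its error drops below $\theta_r$.

```python
import numpy as np
# Reuses node_error and split_error from the previous sketch.

def grow_tree(x, r, theta_r):
    """Recursively split until E_m < theta_r: every leaf stores g_m = mean(r),
    so no leaf's training error exceeds the threshold theta_r."""
    xs = np.unique(x)
    if node_error(r) < theta_r or len(xs) < 2:
        return {"leaf": True, "g": r.mean()}           # create a leaf node
    thresholds = (xs[:-1] + xs[1:]) / 2                # midpoints of unique values
    t = min(thresholds, key=lambda s: split_error(x, r, s))
    mask = x <= t
    return {"leaf": False, "threshold": t,
            "left": grow_tree(x[mask], r[mask], theta_r),
            "right": grow_tree(x[~mask], r[~mask], theta_r)}

# Smaller theta_r -> larger tree (risk of overfitting); larger theta_r -> smoother fit.
```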
Instead of storing a constant output at each leaf, we can also fit a linear model to the instances reaching that leaf:
$$y = \beta_0 + \beta^T \mathbf{x} + \epsilon$$
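Fitting such a leaf model amounts to ordinary least squares on the leaf's samples. A minimal sketch (assuming numpy; the data shown are illustrative):

```python
import numpy as np

def fit_leaf_linear_model(X, y):
    """Fit y = beta_0 + beta^T x + eps by least squares on the leaf's samples."""
    A = np.hstack([np.ones((len(X), 1)), X])   # prepend a column of 1s for beta_0
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta                                # beta[0] = intercept, beta[1:] = slopes

# Usage: samples reaching one leaf, generated by roughly y = 1 + 2x.
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([1.0, 3.1, 4.9, 7.0])
print(fit_leaf_linear_model(X, y))             # ~[1.0, 2.0]
```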
Suppose the points $(x_1, y_1), (x_2, y_2), \ldots, (x_c, y_c)$ are all the samples belonging to the leaf node l. Then the simplest model for l is just the average of their outputs, $\hat{y}_l = \frac{1}{c}\sum_{i=1}^{c} y_i$.
• Find the variable and split point that minimize impurity (among all variables, and among all split positions for numeric variables).
Pruning Trees
C4.5Rules
(Quinlan, 1993)
Learning Rules
Multivariate Trees
THANK YOU
Dr Ajey SNR
Department of ECE
ajeysnr@pes.edu
+91 80 66186626