Advanced Machine Learning
I think that it is a relatively good approximation to truth (which is much too complicated to allow anything but approximations) that mathematical ideas originate in empirics. - John von Neumann
If the elementary functions are not chosen properly, then there will always be an error no matter how large $\tilde{n}$ is.
One requirement that we have seen is that $\phi^{-1}(\cdot)$ must exist. This condition is met if the elementary functions constitute a basis, i.e., are linearly independent.
Fourier series and wavelets are two widely used bases.
In neural networks, (i) the bases depend on the data (as opposed to being fixed), and (ii) the coefficients (weights) are adapted as opposed to analytically computed.
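To make the contrast concrete, here is a minimal sketch of the fixed-basis case: the coefficients of a truncated Fourier basis are computed analytically by least squares. The square-wave target, the grid, and all names are illustrative choices, not from the slides; note that the residual error near the discontinuities persists no matter how many terms are used, illustrating the first point above.

```python
import numpy as np

def fourier_design_matrix(x, n_terms):
    """Fixed Fourier basis. Columns: 1, cos(kx), sin(kx) for k = 1..n_terms."""
    cols = [np.ones_like(x)]
    for k in range(1, n_terms + 1):
        cols.append(np.cos(k * x))
        cols.append(np.sin(k * x))
    return np.stack(cols, axis=1)

x = np.linspace(0, 2 * np.pi, 200)
y = np.sign(np.sin(x))                     # a target this basis can only approximate

# Coefficients computed analytically (least squares), in contrast to a neural
# network, where both the basis and the weights adapt to the data.
Phi = fourier_design_matrix(x, n_terms=5)  # n_terms plays the role of the number of elementary functions
coeffs, *_ = np.linalg.lstsq(Phi, y, rcond=None)
y_hat = Phi @ coeffs
print("max abs error:", np.max(np.abs(y - y_hat)))
```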
…
$$
\zeta(x) \;=\; \frac{1}{2}\operatorname{sgn}(x) + \frac{1}{2} \;=\;
\begin{cases}
0 & x < 0 \\
\text{undefined} & x = 0 \\
1 & x > 0
\end{cases}
\tag{11}
$$
Then,
$$
f(x) \;=\; \sum_{i=1}^{l} y^{(i)} \left[ \zeta\!\left(x - x^{(i)} + \frac{\Delta x}{2}\right) - \zeta\!\left(x - x^{(i)} - \frac{\Delta x}{2}\right) \right]
\tag{12}
$$
Now,
$$
\zeta\!\left(x - x^{(i)} + \frac{\Delta x}{2}\right) - \zeta\!\left(x - x^{(i)} - \frac{\Delta x}{2}\right)
\;=\; \frac{1}{2}\operatorname{sgn}\!\left(x - x^{(i)} + \frac{\Delta x}{2}\right) - \frac{1}{2}\operatorname{sgn}\!\left(x - x^{(i)} - \frac{\Delta x}{2}\right)
\tag{13}
$$
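A minimal numeric sketch of the construction in Eqs. (11)-(12): a piecewise-constant approximation built from differences of shifted step functions. The target sin(x), the number of bins, and the function names are illustrative assumptions, not from the slides.

```python
import numpy as np

def zeta(x):
    """Step function of Eq. (11): 0 for x < 0, 1 for x > 0 (1/2 at the undefined point x = 0)."""
    return 0.5 * np.sign(x) + 0.5

def step_superposition(x, x_centers, y_values, dx):
    """Eq. (12): f(x) = sum_i y^(i) [zeta(x - x^(i) + dx/2) - zeta(x - x^(i) - dx/2)].
    Each bracketed term is 1 exactly inside the bin of width dx around x^(i)."""
    f = np.zeros_like(x, dtype=float)
    for xi, yi in zip(x_centers, y_values):
        f += yi * (zeta(x - xi + dx / 2) - zeta(x - xi - dx / 2))
    return f

# Approximate sin(x) with l = 20 bins of width dx.
dx = 2 * np.pi / 20
x_centers = np.arange(dx / 2, 2 * np.pi, dx)   # bin midpoints x^(i)
y_values = np.sin(x_centers)                   # target samples y^(i)
x = np.linspace(0, 2 * np.pi, 1000)
f = step_superposition(x, x_centers, y_values, dx)
print("max abs error:", np.max(np.abs(f - np.sin(x))))
```

Shrinking dx drives the error to zero, which is the point of the construction.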
[Figure: responses of three neurons over the input x, with weights (w0, w1) = (0.5, 0.1), (-5, 0.8), and (-1, -0.1) for Neuron 1, Neuron 2, and Neuron 3 respectively.]
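A hedged sketch of the figure above, assuming each neuron computes the step activation ζ(w0 + w1·x) with the weights read off the legend; the activation actually used in the slides may differ (e.g., a sigmoid).

```python
import numpy as np
import matplotlib.pyplot as plt

def zeta(x):
    """Step activation from Eq. (11)."""
    return 0.5 * np.sign(x) + 0.5

# Weights read off the figure; the form zeta(w0 + w1*x) is an assumption.
weights = [(0.5, 0.1), (-5.0, 0.8), (-1.0, -0.1)]
x = np.linspace(-10, 10, 1000)
for i, (w0, w1) in enumerate(weights, start=1):
    plt.plot(x, zeta(w0 + w1 * x), label=f"Neuron {i}")
plt.legend()
plt.show()
```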
Classification
[Figure: superposition (output) of the neuron responses, shown against the classification threshold.]
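A sketch of the classification step suggested by the figure: the neuron outputs are superposed with output weights and compared against a threshold. The output weights v and the threshold value here are hypothetical, since the slides do not give them.

```python
import numpy as np

def zeta(x):
    """Step activation from Eq. (11)."""
    return 0.5 * np.sign(x) + 0.5

weights = [(0.5, 0.1), (-5.0, 0.8), (-1.0, -0.1)]  # hidden-neuron weights from the earlier figure
v = np.array([0.4, 0.4, -0.2])                     # illustrative output (superposition) weights
threshold = 0.2                                    # illustrative decision threshold

def classify(x):
    """Superpose the neuron outputs and threshold the result to get a class label."""
    hidden = np.array([zeta(w0 + w1 * x) for w0, w1 in weights])
    output = v @ hidden                            # superposition (output)
    return (output > threshold).astype(int)

x = np.linspace(-10, 10, 9)
print(classify(x))
```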