A Gentle Introduction To Neural Networks With Python
2403343781289312
+ 2843033712837981
+ 2362142787897881
+ 3256541312323213
+ 9864479802118978
+ 8976677987987897
+ 8981257890087988
= ?
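A computer, of course, finds this sum trivial. A few lines of Python (purely illustrative) answer it instantly:

numbers = [2403343781289312, 2843033712837981, 2362142787897881,
           3256541312323213, 9864479802118978, 8976677987987897,
           8981257890087988]
# computers do big-number arithmetic effortlessly
print(sum(numbers))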
AI is Huge!
Google’s AlphaGo and Go
Ideas
Simple Predicting Machine
Kilometres to Miles: not great
Kilometres to Miles: better
Kilometres to Miles: worse !
Kilometres to Miles: best yet !
Key Points
1. Don’t know how something works exactly? Try a model with adjustable parameters.
2. Use the error to refine the parameters - see the sketch below.
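A minimal Python sketch of both points, using the kilometres-to-miles example (the numbers are illustrative, not from the slides):

# model: miles = c * kilometres, where c is the adjustable parameter
true_km, true_miles = 100.0, 62.137   # one known example
for c in (0.5, 0.6, 0.7, 0.61):       # guesses: not great, better, worse!, best yet!
    error = true_miles - c * true_km  # use the error to judge the parameter
    print(c, error)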
Garden Bugs
Classifying Bugs
Key Points
1. Classifying things is kinda like predicting things.
Learning from Data
E = (A + ΔA)x - Ax
ΔA = E / x
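For example (with illustrative numbers): if A = 0.5 and a training example has x = 3.0 with desired output 1.8, the model gives Ax = 1.5, so E = 1.8 - 1.5 = 0.3 and ΔA = E / x = 0.1. The refined slope A + ΔA = 0.6 reproduces that example exactly, since 0.6 · 3.0 = 1.8.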
Hang On!
Oh no!
each update ignores previous examples
Calm Down the Learning
ΔA = L · (E / x)
learning rate
1. Moderating your learning is good - ensures you learn from all your data, and reduces the impact of outliers or noisy training data. A sketch follows.
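A sketch of the moderated update in Python (the learning rate, initial slope and training pairs are illustrative):

A = 0.25                              # initial slope of the dividing line
L = 0.5                               # learning rate
for x, desired in [(3.0, 1.1), (1.0, 0.9)]:
    E = desired - A * x               # error for this training example
    A += L * (E / x)                  # step only part of the way: ΔA = L · (E / x)
    print(A)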
Boolean Logic
Input A   Input B   A AND B   A OR B
   0         0         0        0
   0         1         0        1
   1         0         0        1
   1         1         1        1
XOR Puzzle!
Input A   Input B   A XOR B
   0         0         0
   0         1         1
   1         0         1
   1         1         0
XOR Solution!
1. Some problems can’t be solved with just a single simple linear classifier.
2. You can use multiple nodes working together to solve many of these problems - see the sketch below.
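A sketch of point 2 in Python: XOR built from simple threshold nodes working together (the weights and thresholds are hand-picked for illustration):

def step(x):
    # a single linear threshold node: fires if its input exceeds zero
    return 1 if x > 0 else 0

def xor(a, b):
    or_node = step(a + b - 0.5)              # fires when a OR b
    and_node = step(a + b - 1.5)             # fires when a AND b
    return step(or_node - and_node - 0.5)    # fires when OR but not AND

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor(a, b))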
Brains in Nature
37 billion neurons
(humans 20 billion)
https://en.wikipedia.org/wiki/List_of_animals_by_number_of_neurons
https://faculty.washington.edu/chudler/facts.html
Brains in Nature
logistic function
y = 1 / (1 + e^-x)
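In Python the logistic function is available directly as scipy.special.expit (scipy appears again later in the toolkit); a quick check:

import numpy
import scipy.special

x = numpy.linspace(-5.0, 5.0, 11)
print(scipy.special.expit(x))   # y = 1 / (1 + e^-x), squashed smoothly into (0, 1)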
Artificial Neuron
Artificial Neural Network .. finally!
Pause.
...
Where Does The Learning Happen?
1. Natural brains can do sophisticated things, and are incredibly resilient to damage and imperfect signals .. unlike traditional computing.
2. Trying to copy biological brains partly inspired artificial neural networks.
3. Link weights are the adjustable parameter - it’s where the learning happens.
Feeding Signals Forward
Matrix Multiplication
W·I = X
dot product
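A small numpy sketch of W·I = X (the weights and inputs are illustrative):

import numpy

W = numpy.array([[0.9, 0.3],
                 [0.2, 0.8]])   # link weights between two layers
I = numpy.array([[1.0],
                 [0.5]])        # incoming signals as a column vector
X = numpy.dot(W, I)             # one dot product does every weighted sum at once
print(X)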
Key Points
1. The many feedforward calculations can be expressed concisely as matrix multiplication, no matter what shape the network.
2. Some programming languages can do matrix multiplication really efficiently and quickly.
Network Error
Internal Error
Matrices Again!
Key Points
1. Remember we use the error to guide how we refine a model’s parameters - link weights.
2. The error at the output nodes is easy - the difference between the desired and actual outputs.
3. The error at internal nodes isn’t obvious. A heuristic approach is to split it in proportion to the link weights.
4. … and back propagating the error can be expressed as a matrix multiplication too! See the sketch below.
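A numpy sketch of points 3 and 4 (the matrices are illustrative): the transpose of the weight matrix splits the output errors back in proportion to the link weights:

import numpy

who = numpy.array([[2.0, 3.0],
                   [1.0, 4.0]])        # hidden-to-output link weights
output_errors = numpy.array([[0.8],
                             [0.5]])   # desired minus actual at the output nodes
hidden_errors = numpy.dot(who.T, output_errors)   # errors recombined at the hidden nodes
print(hidden_errors)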
Yes, But How Do We Actually Update The Weights?
Aaarrrggghhh !!
Perfect is the Enemy of Good
1. Gradient descent is a practical way of finding the minimum of difficult functions.
2. You can avoid the chance of overshooting by taking smaller steps if the gradient gets shallower.
3. The error of a neural network is a difficult function of the link weights … so maybe gradient descent will help ...
Climbing Down the Network Error Landscape
E = (desired - actual)²
∂E/∂wij = - ej · oj · (1 - oj) · oi
(oi is the output of the previous node)
http://makeyourownneuralnetwork.blogspot.co.uk/2016/01/a-gentle-introduction-to-calculus.html
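Gradient descent then nudges each weight a small step down the slope, moderated by the learning rate L: new wij = old wij - L · ∂E/∂wij. Because the gradient above already carries a minus sign, the step works out as adding L · ej · oj · (1 - oj) · oi, which is the form the code below uses.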
Updating the Weights
Neural Network Class
matrix maths
numpy
scipy
matplotlib
notebook
Function - Initialise
def __init__(self, inputnodes, hiddennodes, outputnodes, learningrate):
    self.inodes, self.hnodes, self.onodes = inputnodes, hiddennodes, outputnodes
    # random initial weights, drawn with numpy.random.normal(), centred on zero
    self.wih = numpy.random.normal(0.0, pow(self.hnodes, -0.5), (self.hnodes, self.inodes))
    self.who = numpy.random.normal(0.0, pow(self.onodes, -0.5), (self.onodes, self.hnodes))
    # learning rate
    self.lr = learningrate
    # sigmoid activation function
    self.activation_function = lambda x: scipy.special.expit(x)
    pass
Function - Query
return final_outputs
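The slide keeps only the final line; here is a sketch of the whole query method, assuming the weight matrices and sigmoid activation set up in the initialise function above (numpy imported as in the rest of the class):

def query(self, inputs_list):
    # convert the inputs list into a 2d column vector
    inputs = numpy.array(inputs_list, ndmin=2).T
    # feed signals forward into and out of the hidden layer
    hidden_inputs = numpy.dot(self.wih, inputs)
    hidden_outputs = self.activation_function(hidden_inputs)
    # feed signals forward into and out of the final output layer
    final_inputs = numpy.dot(self.who, hidden_outputs)
    final_outputs = self.activation_function(final_inputs)
    return final_outputs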
Function - Train
# error is the (target - actual)
output_errors = targets - final_outputs
# hidden layer error is the output_errors, split by weights, recombined at hidden nodes
hidden_errors = numpy.dot(self.who.T, output_errors)
# update the weights for the links between the hidden and output layers
self.who += self.lr * numpy.dot((output_errors * final_outputs * (1.0 - final_outputs)), numpy.transpose(hidden_outputs))
# update the weights for the links between the input and hidden layers
self.wih += self.lr * numpy.dot((hidden_errors * hidden_outputs * (1.0 - hidden_outputs)), numpy.transpose(inputs))
pass
update weights
Handwriting
Handwritten Numbers Challenge
MNIST Datasets
MNIST dataset:
60,000 training data examples
10,000 test data examples
MNIST Datasets
each record is a label followed by 784 pixel values - a 28 by 28 pixel image
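A sketch of reading one record, assuming the CSV form of MNIST with a file named mnist_train.csv (one comma-separated record per line):

import numpy

with open("mnist_train.csv", "r") as f:
    data_list = f.readlines()

all_values = data_list[0].split(',')
label = int(all_values[0])                            # first value is the label
pixels = numpy.asarray(all_values[1:], dtype=float)   # then 784 pixel values
image = pixels.reshape((28, 28))                      # a 28 by 28 pixel image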
Output Layer Values
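A sketch of building the target vector for the ten output nodes; the 0.01 and 0.99 values keep the sigmoid away from its unreachable extremes:

import numpy

label = 5                                    # e.g. the label read from a record
output_nodes = 10                            # one node per digit 0-9
targets = numpy.zeros(output_nodes) + 0.01   # all nodes: small but not zero
targets[label] = 0.99                        # the correct digit: nearly one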
Experiments
random processes do go wonky!
More Experiments
98% is amazing!
Thoughts
Peek Inside The Mind Of a Neural Network?
live demo!
Finding Out More
makeyourownneuralnetwork.blogspot.co.uk
github.com/makeyourownneuralnetwork
www.amazon.co.uk/dp/B01EER4Z4G
twitter.com/myoneuralnet
slides goo.gl/JKsb62
Raspberry Pi Zero