lect8_dnn
Logic XOR (⊕) operation

X1  X2  X1 ⊕ X2
0   0   0
0   1   1
1   0   1
1   1   0

[Plot of the four input points (0,0), (0,1), (1,0), (1,1) in the X1-X2 plane.]
Artificial neural network example

Neuron
[Figure: inputs X_1, ..., X_m, each multiplied by a weight w_1, ..., w_m; the weighted sum s = Σ_i w_i*X_i + b is passed through an activation function to produce the neuron's output.]
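As a rough sketch of this neuron in C (the function names and the use of a sigmoid activation, as in the later slides, are illustrative assumptions, not part of this slide):

```c
#include <math.h>

/* Sigmoid activation: maps the weighted sum into (0, 1). */
static double sigmoid(double s) {
    return 1.0 / (1.0 + exp(-s));
}

/* One neuron: s = sum_i w[i]*x[i] + b, output = activation(s). */
static double neuron_output(const double *x, const double *w, int m, double b) {
    double s = b;
    for (int i = 0; i < m; i++)
        s += w[i] * x[i];
    return sigmoid(s);
}
```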
Logic AND (∧) operation

X1  X2  X1 ∧ X2
0   0   0
0   1   0
1   0   0
1   1   1

[Plot of the four input points (0,0), (0,1), (1,0), (1,1) in the X1-X2 plane.]
Training for the logic AND with a single neuron

[Figure: training example X1 = 0, X2 = 1 with target T = 0; initial weights w1 = 0, w2 = 0, bias b = 0; weighted sum s = w1*X1 + w2*X2 + b = 0; output O = Sigmoid(0) = 0.5.]

Assume learning rate = 0.1.
∂E/∂w1 = ∂E/∂O * ∂O/∂s * ∂s/∂w1
∂E/∂O = O - T = 0.5 - 0 = 0.5, ∂O/∂s = sigmoid(s)*(1 - sigmoid(s)) = 0.5*(1 - 0.5) = 0.25, ∂s/∂w1 = X1 = 0
To update w1: w1 = 0 - 0.1*0.5*0.25*0 = 0
Training for the logic AND with a single neuron

[Figure: same example X1 = 0, X2 = 1, target T = 0; w1 = 0, w2 = 0, b = 0; s = 0; O = Sigmoid(0) = 0.5.]

∂E/∂w2 = ∂E/∂O * ∂O/∂s * ∂s/∂w2
∂E/∂O = 0.5, ∂O/∂s = sigmoid(s)*(1 - sigmoid(s)) = 0.5*(1 - 0.5) = 0.25, ∂s/∂w2 = X2 = 1
To update w2: w2 = 0 - 0.1*0.5*0.25*1 = -0.0125
Training for the logic AND with a single neuron

[Figure: same example X1 = 0, X2 = 1, target T = 0; w1 = 0, w2 = 0, b = 0; s = 0; O = Sigmoid(0) = 0.5.]

∂E/∂b = ∂E/∂O * ∂O/∂s * ∂s/∂b
∂E/∂O = 0.5, ∂O/∂s = sigmoid(s)*(1 - sigmoid(s)) = 0.5*(1 - 0.5) = 0.25, ∂s/∂b = 1
To update b: b = 0 - 0.1*0.5*0.25*1 = -0.0125
Training for the logic AND with a single neuron

[Figure: the neuron after processing this training example; updated parameters w1 = 0, w2 = -0.0125, b = -0.0125.]
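A minimal C sketch that reproduces the three updates above for the training example (X1 = 0, X2 = 1, target 0), starting from w1 = w2 = b = 0 with learning rate 0.1; it prints w1 = 0, w2 = -0.0125, b = -0.0125 (variable names are illustrative):

```c
#include <stdio.h>
#include <math.h>

static double sigmoid(double s) { return 1.0 / (1.0 + exp(-s)); }

int main(void) {
    double w1 = 0.0, w2 = 0.0, b = 0.0;   /* initial parameters                   */
    double x1 = 0.0, x2 = 1.0, t = 0.0;   /* training example (0, 1) -> 0 for AND */
    double eta = 0.1;                      /* learning rate                        */

    double s = w1 * x1 + w2 * x2 + b;      /* weighted sum = 0                     */
    double o = sigmoid(s);                 /* output O = Sigmoid(0) = 0.5          */

    /* Chain rule: dE/dw = (O - T) * sigmoid'(s) * input, with sigmoid'(s) = O*(1-O). */
    double dE_dO = o - t;                  /* 0.5  */
    double dO_ds = o * (1.0 - o);          /* 0.25 */

    w1 -= eta * dE_dO * dO_ds * x1;        /* 0 - 0.1*0.5*0.25*0 = 0       */
    w2 -= eta * dE_dO * dO_ds * x2;        /* 0 - 0.1*0.5*0.25*1 = -0.0125 */
    b  -= eta * dE_dO * dO_ds * 1.0;       /* 0 - 0.1*0.5*0.25*1 = -0.0125 */

    printf("w1=%g w2=%g b=%g\n", w1, w2, b);
    return 0;
}
```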
Notations:
o N0, N1, N2: sizes of the input layer, hidden layer, and output layer, respectively
o W0[N0][N1]: weights from the input layer to the hidden layer; W0[i][j] is the weight from input unit i to hidden unit j. B0[N1]: biases; B0[j] is the bias of hidden unit j.
o W1[N1][N2]: weights from the hidden layer to the output layer; W1[i][j] is the weight from hidden unit i to output unit j. B1[N2]: biases; B1[j] is the bias of output unit j.
o IN[N0]: input; HS[N1], OS[N2]: weighted sums of the hidden and output layers; HO[N1], OO[N2]: outputs of the hidden and output layers.
3-level feedforward neural network

[Figure: input layer -> hidden layer -> output layer.
Input: IN[N0]; weights: W0[N0][N1] and W1[N1][N2];
hidden layer weighted sum: HS[N1]; output layer weighted sum: OS[N2]; output: OO[N2].]
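The notation above could be held in C arrays such as the following (the concrete layer sizes are placeholder values, not from the slides; a target array T[N2] is added here for the error computation later):

```c
#define N0 2   /* input layer size  (placeholder value) */
#define N1 3   /* hidden layer size (placeholder value) */
#define N2 1   /* output layer size (placeholder value) */

double IN[N0];          /* input vector                   */
double W0[N0][N1];      /* input -> hidden weights        */
double B0[N1];          /* hidden layer biases            */
double HS[N1];          /* hidden layer weighted sums     */
double HO[N1];          /* hidden layer outputs (sigmoid) */
double W1[N1][N2];      /* hidden -> output weights       */
double B1[N2];          /* output layer biases            */
double OS[N2];          /* output layer weighted sums     */
double OO[N2];          /* network outputs (sigmoid)      */
double T[N2];           /* target values                  */
```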
Forward propagation (compute OO and E)
Compute hidden layer:
o HS[j] = Σ_i IN[i]*W0[i][j] + B0[j], HO[j] = sigmoid(HS[j])
o In matrix form: HS = IN*W0 + B0, HO = sigmoid(HS)
Forward propagation (compute OO and E)
Compute final (output) layer:
o OS[j] = Σ_i HO[i]*W1[i][j] + B1[j], OO[j] = sigmoid(OS[j])
o In matrix form: OS = HO*W1 + B1, OO = sigmoid(OS)
o Error: E = 1/2 * Σ_j (OO[j] - T[j])^2, where T[j] is the target value of output unit j
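A sketch of the forward pass using the array declarations above (the function name and loop structure are mine; the formulas are the ones on these slides):

```c
#include <math.h>

static double sigmoid(double s) { return 1.0 / (1.0 + exp(-s)); }

/* Forward propagation: compute HS, HO, OS, OO, and the squared error E. */
double forward(void) {
    /* Hidden layer: HS[j] = sum_i IN[i]*W0[i][j] + B0[j], HO[j] = sigmoid(HS[j]) */
    for (int j = 0; j < N1; j++) {
        HS[j] = B0[j];
        for (int i = 0; i < N0; i++)
            HS[j] += IN[i] * W0[i][j];
        HO[j] = sigmoid(HS[j]);
    }
    /* Output layer: OS[j] = sum_i HO[i]*W1[i][j] + B1[j], OO[j] = sigmoid(OS[j]) */
    for (int j = 0; j < N2; j++) {
        OS[j] = B1[j];
        for (int i = 0; i < N1; i++)
            OS[j] += HO[i] * W1[i][j];
        OO[j] = sigmoid(OS[j]);
    }
    /* Error: E = 1/2 * sum_j (OO[j] - T[j])^2 */
    double E = 0.0;
    for (int j = 0; j < N2; j++)
        E += 0.5 * (OO[j] - T[j]) * (OO[j] - T[j]);
    return E;
}
```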
Backward propagation
∂E/∂OO[j] = OO[j] - T[j]
In matrix form: dE_OO = OO - T. This can be stored in an array dE_OO[N2];
Backward propagation
∂E/∂OS[j] = ∂E/∂OO[j] * ∂OO[j]/∂OS[j] = dE_OO[j] * OO[j] * (1 - OO[j])
In matrix form: dE_OS = dE_OO ⊙ OO ⊙ (1 - OO) (elementwise product). This can be stored in an array dE_OS[N2];
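Continuing the sketch with the declarations above, these two gradient arrays could be computed as follows:

```c
double dE_OO[N2];   /* dE/dOO[j] = OO[j] - T[j]                   */
double dE_OS[N2];   /* dE/dOS[j] = dE_OO[j] * OO[j] * (1 - OO[j]) */

/* Output-layer deltas, computed from the forward-pass results. */
void backward_output_deltas(void) {
    for (int j = 0; j < N2; j++) {
        dE_OO[j] = OO[j] - T[j];
        dE_OS[j] = dE_OO[j] * OO[j] * (1.0 - OO[j]);
    }
}
```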
Backward propagation
Since OS[j] = Σ_i HO[i]*W1[i][j] + B1[j], we have ∂OS[j]/∂W1[i][j] = HO[i] and ∂OS[j]/∂B1[j] = 1.
Hence, ∂E/∂W1[i][j] = dE_OS[j] * HO[i] and ∂E/∂B1[j] = dE_OS[j].
Backward propagation
Each hidden output HO[i] feeds every output unit, so its gradient sums the contributions from all of them.
Hence, ∂E/∂HO[i] = Σ_j dE_OS[j] * W1[i][j] and ∂E/∂HS[i] = ∂E/∂HO[i] * HO[i] * (1 - HO[i]).
Backward propagation
In matrix form: dE_HO = dE_OS * W1^T and dE_HS = dE_HO ⊙ HO ⊙ (1 - HO). These can be stored in arrays dE_HO[N1] and dE_HS[N1];
Backward propagation
Since HS[j] = Σ_i IN[i]*W0[i][j] + B0[j], we have ∂E/∂W0[i][j] = dE_HS[j] * IN[i] and ∂E/∂B0[j] = dE_HS[j].
All weights and biases are then updated with the learning rate, e.g. W1[i][j] = W1[i][j] - rate * ∂E/∂W1[i][j].
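A sketch of the remaining backward-propagation steps and the gradient-descent update, continuing the same declarations (the dE_HO/dE_HS array names follow the slides' dE_ naming convention; the learning-rate update rule is the one used in the single-neuron example):

```c
double dE_HO[N1], dE_HS[N1];

void backward_and_update(double eta) {
    /* Hidden-layer deltas (using the pre-update W1):
       dE/dHO[i] = sum_j dE_OS[j]*W1[i][j], dE/dHS[i] = dE_HO[i]*HO[i]*(1-HO[i]) */
    for (int i = 0; i < N1; i++) {
        dE_HO[i] = 0.0;
        for (int j = 0; j < N2; j++)
            dE_HO[i] += dE_OS[j] * W1[i][j];
        dE_HS[i] = dE_HO[i] * HO[i] * (1.0 - HO[i]);
    }
    /* Output-layer parameters: dE/dW1[i][j] = HO[i]*dE_OS[j], dE/dB1[j] = dE_OS[j] */
    for (int j = 0; j < N2; j++) {
        for (int i = 0; i < N1; i++)
            W1[i][j] -= eta * HO[i] * dE_OS[j];
        B1[j] -= eta * dE_OS[j];
    }
    /* Input-layer parameters: dE/dW0[i][j] = IN[i]*dE_HS[j], dE/dB0[j] = dE_HS[j] */
    for (int j = 0; j < N1; j++) {
        for (int i = 0; i < N0; i++)
            W0[i][j] -= eta * IN[i] * dE_HS[j];
        B0[j] -= eta * dE_HS[j];
    }
}
```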
Backward propagation
[Figure: on the left, a generic layer with input "IN layer" and output O: given ∂E/∂O, backward propagation produces ∂E/∂(IN layer). On the right, a deeper network X -> Layer 1 -> H1 -> Layer 2 -> H2 -> Layer 3 -> Y: the same procedure is applied layer by layer, propagating the gradient from Y back through Layer 3, Layer 2, and Layer 1.]