Perceptron
Berrin Yanikoglu
Perceptron
• A single artificial neuron that computes a weighted sum of its inputs and
applies a threshold activation function.
a = hardlim(n) = hardlim([1 2]p - 2),  with w_1,1 = 1, w_1,2 = 2, b = -2
Decision boundary: all points p for which w^T p + b = 0
If we know the weights but not the bias, we can take a point on
the decision boundary, p = [2 0]^T, and solve [1 2]p + b = 2 + b = 0
for the bias, giving b = -2.
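A minimal sketch of this neuron in Python (NumPy), using the weights [1 2] and the bias b = -2 derived above; the code itself is not from the slides:

```python
import numpy as np

def hardlim(n):
    """Threshold activation: 1 if n >= 0, else 0."""
    return np.where(n >= 0, 1, 0)

w = np.array([1.0, 2.0])   # w_1,1 = 1, w_1,2 = 2
b = -2.0                   # bias derived from the boundary point p = [2 0]^T

def neuron(p):
    return hardlim(w @ p + b)

print(neuron(np.array([2.0, 0.0])))  # on the boundary: n = 0  -> output 1
print(neuron(np.array([0.0, 0.0])))  # n = -2 < 0              -> output 0
```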
[Figure: weight vector w, an input p at angle θ to w, and the decision boundary orthogonal to w, with the projection of p onto w marked.]
w^T p = ||w|| ||p|| cos θ
projection of p onto w = ||p|| cos θ = w^T p / ||w||
On the boundary: 1w^T p + b = 0  ⇒  1w^T p = -b
• All points on the decision boundary have the same inner
product (= -b) with the weight vector.
• Therefore they all have the same projection onto the weight
vector, so they must lie on a line orthogonal to it.
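A quick numeric check of this claim (a sketch, not from the slides): several points chosen on the boundary w^T p + b = 0 all yield the same inner product -b, and hence the same projection onto w:

```python
import numpy as np

w = np.array([1.0, 2.0])
b = -2.0

# Three points on the boundary w^T p + b = 0, i.e. p1 + 2*p2 = 2:
for p in [np.array([2.0, 0.0]), np.array([0.0, 1.0]), np.array([4.0, -1.0])]:
    inner = w @ p                     # same value (-b = 2) for every boundary point
    proj = inner / np.linalg.norm(w)  # projection of p onto w: ||p|| cos(theta)
    print(inner, proj)
```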
ADVANCED
An Illustrative Example
Boolean OR
{ p1 = [0 0]^T, t1 = 0 }   { p2 = [0 1]^T, t2 = 1 }   { p3 = [1 0]^T, t3 = 1 }   { p4 = [1 1]^T, t4 = 1 }
Given the above input-output pairs (p,t), can you find (manually)
the weights of a perceptron to do the job?
Boolean OR Solution
1) Pick an admissible decision boundary and a weight vector orthogonal
to it, e.g. 1w = [0.5 0.5]^T.
2) Find the bias from a point on the boundary, e.g. p = [0 0.5]^T:
1w^T p + b = [0.5 0.5] [0 0.5]^T + b = 0.25 + b = 0  ⇒  b = -0.25
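A short check (a sketch; the weights and bias are the ones chosen above) that this perceptron computes Boolean OR on all four input pairs:

```python
import numpy as np

def hardlim(n):
    return np.where(n >= 0, 1, 0)

w = np.array([0.5, 0.5])
b = -0.25

inputs  = [np.array([0, 0]), np.array([0, 1]), np.array([1, 0]), np.array([1, 1])]
targets = [0, 1, 1, 1]   # Boolean OR

for p, t in zip(inputs, targets):
    a = hardlim(w @ p + b)
    print(p, "->", a, "(target", t, ")")
```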
Multiple-Neuron Perceptron: Matrix Form
Weights of one neuron go in one row of W:

W = [ w_1,1  w_1,2  …  w_1,R
      w_2,1  w_2,2  …  w_2,R
        ⋮      ⋮          ⋮
      w_S,1  w_S,2  …  w_S,R ]

Equivalently, W = [ 1w^T ; 2w^T ; … ; Sw^T ], where
iw = [ w_i,1  w_i,2  …  w_i,R ]^T holds the weights of neuron i.
ADVANCED
a_i = hardlim(n_i) = hardlim(iw^T p + b_i)
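A sketch of the whole layer computed at once in matrix form; the particular values of W, b, and p here are assumed for illustration (S = 2 neurons, R = 3 inputs):

```python
import numpy as np

def hardlim(n):
    return np.where(n >= 0, 1, 0)

# S = 2 neurons, R = 3 inputs; each row of W is one neuron's weight vector.
W = np.array([[1.0,  2.0, 0.5],    # 1w^T
              [0.0, -1.0, 1.0]])   # 2w^T
b = np.array([-2.0, 0.5])

p = np.array([2.0, 0.0, 1.0])
a = hardlim(W @ p + b)   # computes a_i = hardlim(iw^T p + b_i) for every i at once
print(a)                 # -> [1 1]
```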
Multiple-Neuron Perceptron
[Figure: a single neuron with inputs x1, x2, weights w1 = 0.5, w2 = 0.5, and a bias weight w0 = -0.8 on the fixed input x0 = 1, feeding a summation Σ and threshold.]
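A small check (a sketch, assuming the weights in the figure) that this neuron computes Boolean AND:

```python
import numpy as np

def hardlim(n):
    return np.where(n >= 0, 1, 0)

w = np.array([0.5, 0.5])   # w1, w2 from the figure
b = -0.8                   # bias weight w0 on the fixed input x0 = 1

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, "->", hardlim(w @ np.array(x) + b))   # outputs 0, 0, 0, 1: Boolean AND
```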
Perceptron Limitations
For a problem that is not linearly separable:
– Would it help if we used more layers of neurons? (See the sketch below.)
– What could be the learning rule for each neuron?
• We also discussed how more than one node can delimit
convex (open or closed) regions.
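As one illustration of the first question above (a sketch, not from the slides; all weights here are assumed for illustration): a hand-wired two-layer network of threshold neurons solves XOR, a problem no single perceptron can separate, by combining OR and NAND hidden neurons with an AND output neuron:

```python
import numpy as np

def hardlim(n):
    return np.where(n >= 0, 1, 0)

def two_layer_xor(x):
    x = np.array(x, dtype=float)
    # Hidden layer: one neuron computes OR, the other NAND.
    h1 = hardlim(np.array([0.5, 0.5]) @ x - 0.25)    # OR
    h2 = hardlim(np.array([-0.5, -0.5]) @ x + 0.8)   # NAND
    # Output layer: AND of the two hidden outputs.
    return hardlim(0.5 * h1 + 0.5 * h2 - 0.8)

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, "->", two_layer_xor(x))   # XOR: 0, 1, 1, 0
```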