Session NN
Post-Graduate Diploma in ML/AI
12-07-2020
Course: Machine Learning
Lecture On: Neural Network - Intro
Session - Agenda
➢ Introduction, Industry Use-cases
➢ Perceptron
➢ Feed Forward
➢ Backpropagation
➢ Assignment - Problem Statement
➢ Doubt Resolution
In which of the following applications can we use deep learning to solve the problem?
Neural Networks in Business
➢ Voice Assistants
➢ Recommendation Engines
➢ Google: Search, Translate, Maps
➢ Facial Recognition
➢ Self-driving Cars
➢ Image Understanding
➢ Email Filtering
Construction of an Artificial Neuron
Data flows through the network from neuron to neuron along connections. Every connection has a specific weight, which regulates the flow of data through it.
Poll
Assume a simple MLP model with 3 neurons and inputs = 1, 2, 3. The weights to the input neurons are 4, 5 and 6 respectively. Assume the activation function is a linear constant value of 3. What will be the output?
A) 32
B) 643
C) 96
D) 48
Solution: C. The output will be calculated as 3 × (1×4 + 2×5 + 3×6) = 96.
But how does it work?
1. All the inputs x are multiplied with their weights w. Let's call each product k.
2. Add all the multiplied values and call them the Weighted Sum.
3. Apply that Weighted Sum to the correct Activation Function.
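A minimal sketch of these three steps in Python (NumPy and the helper name neuron are my own, not from the slides), reusing the numbers from the poll above:

    import numpy as np

    def neuron(x, w, activation):
        weighted_sum = np.dot(x, w)      # steps 1 and 2: multiply inputs by weights and sum
        return activation(weighted_sum)  # step 3: apply the activation function

    x = np.array([1.0, 2.0, 3.0])        # inputs from the poll
    w = np.array([4.0, 5.0, 6.0])        # weights from the poll
    print(neuron(x, w, activation=lambda z: 3 * z))  # 3 * 32 = 96.0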
Why do we need Weights and Bias?
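The slide's diagram cannot be reproduced here; as a rough sketch of the idea (my own example, not from the deck), weights scale each input's influence, while the bias shifts the weighted sum so a neuron can produce a non-zero output even when every input is zero:

    import numpy as np

    def neuron(x, w, b):
        # the weights w scale the inputs, the bias b shifts the weighted sum
        return np.dot(x, w) + b

    x = np.array([0.0, 0.0])
    w = np.array([0.5, -0.3])
    print(neuron(x, w, b=0.0))  # 0.0 -- with no bias the output is pinned to zero
    print(neuron(x, w, b=1.5))  # 1.5 -- the bias lets the neuron shift its output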
Poll
Statement 1: It is possible to train a network well by initializing all the weights as 0.
Statement 2: It is possible to train a network well by initializing all the biases as 0.
Which of the statements given above is true?
Solution: Statement 2 is true. Even if all the biases are zero, there is a chance that the neural network may learn. On the other hand, if all the weights are zero, the neural network may never learn to perform the task.
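A small demonstration of the usual symmetry argument behind this answer (my own sketch): with all-zero weights, every hidden neuron computes the same output and therefore receives the same gradient, so the neurons can never differentiate, whereas zero biases alone cause no such problem.

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=4)            # one input example with 4 features

    W_zero = np.zeros((4, 3))         # all-zero weights: every neuron is identical
    W_rand = rng.normal(size=(4, 3))  # random weights break the symmetry
    b = np.zeros(3)                   # all-zero biases are harmless here

    print(x @ W_zero + b)             # [0. 0. 0.] -- identical outputs, identical gradients
    print(x @ W_rand + b)             # three distinct values -- neurons can specialize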
Why We Use Activation Functions
https://ai.stackexchange.com/questions/5493/what-is-the-purpose-of-an-activation-function-in-neural-networks
Why can't we do it without activating the input signal?
Without an activation function, every layer is just a linear transformation of its input, so no matter how many layers we stack, the network can only learn linear relationships. That is why we use artificial neural network techniques such as deep learning to make sense of complicated, high-dimensional, non-linear big datasets: a model with many hidden layers and a complex architecture can extract knowledge from such data.
https://towardsdatascience.com/activation-functions-neural-networks-1cbd9f8d91d6
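The point behind the linked discussions: without a non-linearity, stacking layers buys nothing, because a composition of linear maps is itself a single linear map. A quick numerical check (my own sketch):

    import numpy as np

    rng = np.random.default_rng(1)
    x = rng.normal(size=3)
    W1 = rng.normal(size=(3, 4))       # "layer 1", with no activation in between
    W2 = rng.normal(size=(4, 2))       # "layer 2"

    two_layers = (x @ W1) @ W2         # two stacked linear layers
    one_layer = x @ (W1 @ W2)          # a single equivalent linear layer
    print(np.allclose(two_layers, one_layer))  # True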
Which one is better to use?
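For reference, standard definitions of the usual candidates, sketched in NumPy (my own code, not from the slides):

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))  # output in (0, 1)

    def tanh(z):
        return np.tanh(z)                # output in (-1, 1)

    def relu(z):
        return np.maximum(0.0, z)        # output in [0, infinity)

    z = np.array([-2.0, 0.0, 2.0])
    print(sigmoid(z), tanh(z), relu(z))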
Poll
Which of the following activation functions can't be used at the output layer to classify an image?
A) Sigmoid
B) Tanh
C) ReLU
D) If(x > 5, 1, 0)
E) None of the above
Solution: C. ReLU gives a continuous output in the range 0 to infinity, but at the output layer we want a finite range of values, so option C is correct.
Multiple Neurons
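The slide's figure cannot be reproduced here; as a sketch of the idea (my own example), a layer of multiple neurons can be computed at once by stacking each neuron's weight vector into a column of a matrix:

    import numpy as np

    x = np.array([1.0, 2.0, 3.0])       # 3 inputs
    W = np.array([[0.1, 0.4],           # each column holds one neuron's weights
                  [0.2, 0.5],
                  [0.3, 0.6]])
    b = np.array([0.0, 1.0])            # one bias per neuron

    layer_output = x @ W + b            # all neuron outputs in one matrix product
    print(layer_output)                 # [1.4  4.2]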
Let’s Add a Bit of Complexity Now
Feedforward
Example
This guarantees all values are between 0 and 1 and that they add up to 1.
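A sketch of the softmax function that provides this guarantee (assuming, as the slide suggests, that the example's output layer uses softmax; subtracting the max is a standard numerical-stability trick):

    import numpy as np

    def softmax(z):
        e = np.exp(z - np.max(z))  # subtracting the max does not change the result
        return e / e.sum()

    z = np.array([2.0, 1.0, 0.1])
    p = softmax(z)
    print(p, p.sum())              # all values in [0, 1], summing to 1.0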
Poll
The number of nodes in the input and output layers is 10 each, and there are 3 hidden layers having 5 nodes each. What is the total number of network parameters?
A) 150
B) 170
C) 220
D) It is an arbitrary value
Solution: B. Weights: (50 + 25 + 25 + 50) = 150, plus 20 bias terms = 170.
Cost Function
A lower error between the actual and the predicted values signifies that the algorithm has done a good job in learning. A common measure of the discrepancy between the two values is the "cross-entropy".
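A sketch of cross-entropy for a one-hot target (the slide does not give the formula; this is the common form, with a small epsilon added to avoid log(0)):

    import numpy as np

    def cross_entropy(y_true, y_pred, eps=1e-12):
        # y_true: one-hot target, y_pred: predicted probabilities
        return -np.sum(y_true * np.log(y_pred + eps))

    y_true = np.array([0.0, 1.0, 0.0])
    print(cross_entropy(y_true, np.array([0.1, 0.8, 0.1])))  # ~0.22, good prediction
    print(cross_entropy(y_true, np.array([0.7, 0.2, 0.1])))  # ~1.61, poor prediction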
Backward Propagation
We know that backward propagation is used to calculate the gradient of the loss function with respect to the parameters.
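A minimal illustration of this (my own sketch, not the slide's derivation): the chain rule applied to a single sigmoid neuron with a squared-error loss, yielding the gradient with respect to the weights and bias.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    x, w, b, y = np.array([1.0, 2.0]), np.array([0.5, -0.5]), 0.1, 1.0

    z = np.dot(x, w) + b          # forward pass
    a = sigmoid(z)
    loss = 0.5 * (a - y) ** 2

    dL_da = a - y                 # backward pass: one chain-rule factor per step
    da_dz = a * (1 - a)           # derivative of the sigmoid
    dL_dz = dL_da * da_dz
    dL_dw = dL_dz * x             # gradient of the loss w.r.t. the weights
    dL_db = dL_dz                 # gradient of the loss w.r.t. the bias
    print(dL_dw, dL_db)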
Training
Gradient Descent
Starting at the top of the mountain, we take our first step downhill in the direction specified by the negative gradient. Next we recalculate the negative gradient (passing in the coordinates of our new point) and take another step in the direction it specifies. We continue this process iteratively until we get to the bottom of our graph, or to a point where we can no longer move downhill: a local minimum.
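The same idea as code (a sketch; the quadratic function, starting point, and learning rate below are assumed for illustration):

    def gradient(x):
        return 2 * (x - 3)        # derivative of f(x) = (x - 3)^2, our "mountain"

    x, lr = 10.0, 0.1             # starting point and learning rate (assumed)
    for _ in range(100):
        x = x - lr * gradient(x)  # step in the direction of the negative gradient
    print(x)                      # converges toward the minimum at x = 3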
#LifeKoKaroLift
Thank You!