Module 2
Introduction to Neural Networks and Deep Learning Frameworks
Program Elements in TensorFlow
Computational Graph
Typically, artificial neural networks have a layered structure. The Input Layer picks up the input signals and passes them on to the next layer, also known as the ‘Hidden’ Layer (there may be more than one Hidden Layer in a neural network). Last comes the Output Layer, which delivers the result.
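A minimal sketch of this layered structure, assuming Keras from TensorFlow 2.x; the layer sizes here are illustrative, not prescribed by the slides:

```python
from tensorflow import keras

# Input layer -> one hidden layer -> output layer (sizes are illustrative)
model = keras.Sequential([
    keras.Input(shape=(3,)),                      # input layer: three input signals
    keras.layers.Dense(4, activation="relu"),     # hidden layer
    keras.layers.Dense(1, activation="sigmoid"),  # output layer delivers the result
])
model.summary()
```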
A neural network is a computer simulation of the way biological neurons work within a human brain.
Dendrites: These branch-like structures extending away from the cell body receive messages from other neurons and allow them to travel to the cell body
Axon: An axon carries an electrical impulse from the cell body to another
neuron
▪ The three arrows correspond to the three inputs coming into the network
▪ Values [0.7, 0.6, and 1.4] are weights assigned to the corresponding input
▪ Inputs get multiplied by their respective weights and their sum is taken, as sketched below
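A minimal sketch of this weighted sum in Python, assuming NumPy; the weights are the example values above, while the input values are made up:

```python
import numpy as np

# Three input signals (illustrative values) and the example weights from above
inputs = np.array([1.0, 2.0, 3.0])
weights = np.array([0.7, 0.6, 1.4])

# Each input is multiplied by its weight, and the products are summed
weighted_sum = np.dot(inputs, weights)
print(weighted_sum)  # 0.7*1.0 + 0.6*2.0 + 1.4*3.0 = 6.1
```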
Figure: a perceptron with inputs Humidity (x1) and Blue shirt (x2) producing an Output, illustrating the question "Will it rain if I wear a blue shirt?"
Common activation functions:
Binary Step
Sigmoid
Tanh
ReLU
Leaky ReLU
Softmax
Identity Function
• A straight-line function where the activation is proportional to the input
• No matter how many layers we have, if all of them are linear, the final activation of the last layer is nothing but a linear function of the input of the first layer
• Range: $(-\infty, \infty)$
$f(x) = x$
Binary Step Function
• A discontinuous function: its value is 0 for a negative argument and 1 for a positive argument
$f(x) = \begin{cases} 0 & \text{for } x < 0 \\ 1 & \text{for } x \ge 0 \end{cases}$

Sigmoid Function
• When we apply the weighted sum in place of x, the values are scaled between 0 and 1
• Large negative numbers are scaled toward 0, and large positive numbers are scaled toward 1
• Range: $(0, 1)$
$f(x) = \frac{1}{1 + e^{-x}}$
Tanh Function
• The Tanh activation almost always works better than the sigmoid function because optimization is easier with it
• The advantage of Tanh is that it can deal more easily with negative numbers
• Range: $(-1, 1)$
$f(x) = \tanh(x) = \frac{2}{1 + e^{-2x}} - 1$
ReLU Function
• This function passes only positive values through during forward propagation; negative values are clamped to 0
• Range: $[0, \infty)$
$f(x) = \begin{cases} 0 & \text{for } x < 0 \\ x & \text{for } x \ge 0 \end{cases}$
Leaky ReLU Function
• Like ReLU, but it allows a small, non-zero output for negative arguments instead of clamping them to 0
• Range: $(-\infty, \infty)$
$f(x) = \begin{cases} \alpha x & \text{for } x < 0 \\ x & \text{for } x \ge 0 \end{cases}$, where $\alpha$ is a small constant
Softmax Function
• It is useful for finding out the class which has the maximum probability
• The Softmax function is ideally used in the Output Layer of a classifier, where we are actually trying to attain the probability of each class
• Range: $(0, 1)$
$\sigma(\mathbf{z})_j = \frac{e^{z_j}}{\sum_{k=1}^{K} e^{z_k}}, \quad j = 1, 2, \ldots, K$
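A minimal sketch of these activation functions in NumPy; the function names are mine, and the formulas follow the definitions above:

```python
import numpy as np

def binary_step(x):
    # 0 for negative arguments, 1 otherwise
    return np.where(x < 0, 0, 1)

def sigmoid(x):
    # Scales values into (0, 1)
    return 1 / (1 + np.exp(-x))

def tanh(x):
    # Scales values into (-1, 1); equivalent to np.tanh(x)
    return 2 / (1 + np.exp(-2 * x)) - 1

def relu(x):
    # Passes positive values unchanged, clamps negatives to 0
    return np.maximum(0, x)

def softmax(z):
    # Exponentiates and normalizes so the outputs sum to 1
    e = np.exp(z - np.max(z))  # subtract the max for numerical stability
    return e / e.sum()

x = np.array([-2.0, 0.5, 3.0])
print(sigmoid(x), softmax(x))
```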
By training a perceptron, we try to find a line, a plane, or some hyperplane that can accurately separate two classes by adjusting the weights and biases
Perceptron training loop: each input $x_1, x_2, x_3$ is multiplied by its weight $w_1, w_2, w_3$, a bias is added, and the sum is passed through an activation function to produce the output. The weights are then updated, and the loop repeats until training stops:
$w_{\text{new}} = w_{\text{old}} - LR \cdot \frac{\partial E}{\partial w}$
where $LR$ is the learning rate and $E$ is the error.
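A minimal sketch of this loop in Python, assuming a binary step activation and a tiny made-up OR-gate dataset; the update follows the rule above, with the gradient taken as (output − target) · x for this simple error measure:

```python
import numpy as np

# Toy, linearly separable dataset (illustrative): the OR gate
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 1])

weights = np.zeros(2)
bias = 0.0
LR = 0.1  # learning rate

for epoch in range(20):
    for xi, target in zip(X, y):
        # Calculate the weighted sum and pass it through a step activation
        output = 1 if np.dot(xi, weights) + bias >= 0 else 0
        # Update rule: w_new = w_old - LR * dE/dw, with dE/dw = (output - target) * x
        error = output - target
        weights -= LR * error * xi
        bias -= LR * error

print(weights, bias)
```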
TensorFlow is an open-source software library for high-performance numerical computations
Developed by Google
Used in products such as Google Translate
TensorBoard: used for visualizing TensorFlow computations and graphs
PyTorch is ‘Pythonic’ in nature
DL4J use cases: image recognition, fraud detection, parts-of-speech tagging, text mining, and natural language processing
MXNet is developed by the Apache Software Foundation
Use cases: speech recognition, imaging, forecasting, and natural language processing (NLP)
A tensor is given as an input to a neural network
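A minimal sketch of creating tensors of different ranks, assuming TensorFlow 2.x is installed as tensorflow:

```python
import tensorflow as tf

# Tensors of increasing rank: scalar, vector, matrix
scalar = tf.constant(3.0)
vector = tf.constant([1.0, 2.0, 3.0])
matrix = tf.constant([[1.0, 2.0], [3.0, 4.0]])

print(scalar.shape, vector.shape, matrix.shape)  # () (3,) (2, 2)
```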
Computational graph example: with a = 10, b = 20, and c = 30, a Multiplication node combines a and b, and an Addition node adds c, producing
$h = (a * b) + c$
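A minimal sketch of this graph, assuming the TensorFlow 1.x-style graph API as exposed through tf.compat.v1 in TensorFlow 2:

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

# Build the computational graph: h = (a * b) + c
a = tf.constant(10, name="a")
b = tf.constant(20, name="b")
c = tf.constant(30, name="c")
mul = tf.multiply(a, b)  # Multiplication node
h = tf.add(mul, c)       # Addition node

# Execute the graph in a Session
with tf.compat.v1.Session() as sess:
    print(sess.run(h))  # (10 * 20) + 30 = 230
```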
Session: a Session is used to execute the operations in a computational graph
Placeholder: a placeholder is a graph node whose value is fed in at run time
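A minimal sketch of a placeholder fed through a session at run time, again assuming the tf.compat.v1 API:

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

# A placeholder is a graph node whose value is supplied when the graph runs
x = tf.compat.v1.placeholder(tf.float32, shape=(), name="x")
y = x * 2 + 1

with tf.compat.v1.Session() as sess:
    # feed_dict supplies the placeholder's value at run time
    print(sess.run(y, feed_dict={x: 5.0}))  # 11.0
```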