Neural Networks and Deep Learning: DeepLearning.AI Summary
Neural Networks and Deep Learning
Table of contents
Course summary
Introduction to deep learning
What is a (Neural Network) NN?
Supervised learning with neural networks
Why is deep learning taking off?
Neural Networks Basics
Binary classification
Logistic regression
Logistic regression cost function
Gradient Descent
Derivatives
More Derivatives examples
Computation graph
Derivatives with a Computation Graph
Logistic Regression Gradient Descent
Gradient Descent on m Examples
Vectorization
Vectorizing Logistic Regression
Notes on Python and NumPy
General Notes
Shallow neural networks
Neural Networks Overview
Neural Network Representation
Computing a Neural Network's Output
Vectorizing across multiple examples
Activation functions
Why do you need non-linear activation functions?
Derivatives of activation functions
Gradient descent for Neural Networks
Random Initialization
Deep Neural Networks
Deep L-layer neural network
Forward Propagation in a Deep Network
Getting your matrix dimensions right
Why deep representations?
Building blocks of deep neural networks
Forward and Backward Propagation
Parameters vs Hyperparameters
What does this have to do with the brain?
Extra: Ian Goodfellow interview
Course summary
Here is the course summary as given on the course link:
If you want to break into cutting-edge AI, this course will help you do so. Deep learning
engineers are highly sought after, and mastering deep learning will give you numerous
new career opportunities. Deep learning is also a new "superpower" that will let you
build AI systems that just weren't possible a few years ago.
In this course, you will learn the foundations of deep learning. When you finish this
class, you will:
Understand the major technology trends driving Deep Learning
Be able to build, train and apply fully connected deep neural networks
Know how to implement efficient (vectorized) neural networks
Understand the key parameters in a neural network's architecture
This course also teaches you how Deep Learning actually works, rather than presenting
only a cursory or surface-level description. So after completing it, you will be able to
apply deep learning to your own applications. If you are looking for a job in AI, after
this course you will also be able to answer basic interview questions.
What is a (Neural Network) NN?
Basically, a single neuron calculates a weighted sum of its input (W.T*X), and in a perceptron we then set a threshold to predict the output. If the weighted sum crosses the threshold the perceptron fires; otherwise it outputs 0.
The disadvantage of the perceptron is that it only outputs binary values: if we make a small change in a weight or bias, the perceptron can flip its output entirely. We need a system that changes the output only slightly in response to a small change in weight or bias. This is where the sigmoid function comes into the picture.
If we replace the perceptron's hard threshold with a sigmoid function, a small parameter change makes only a slight change in the output.
e.g. a perceptron outputs 0; you slightly change a weight and bias and the output jumps to 1, even though the desired output is 0.7. With a sigmoid, the same slight change in weight and bias moves the output smoothly, e.g. to 0.7.
If we apply a sigmoid activation function, a single neuron acts as logistic regression.
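To make this concrete, here is a minimal NumPy sketch of such a neuron; the numbers are made up for illustration:

import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

x = np.array([[0.5], [1.0], [-0.3]])  # input column vector, shape (3, 1)
w = np.array([[0.2], [-0.1], [0.4]])  # weights, shape (3, 1)
b = 0.1                               # bias

z = np.dot(w.T, x) + b  # weighted sum w.T*x + b
a = sigmoid(z)          # smooth output in (0, 1) instead of a hard 0/1
print(a)                # close to 0.5 for these numbers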
Simple NN graph:
Image taken from tutorialspoint.com
RELU (rectified linear unit) is currently the most popular activation function; it makes deep NNs train faster.
Hidden layers learn the connections between inputs automatically; that's what deep learning is good at.
Supervised learning with neural networks
Supervised learning means we have pairs (X, Y) and we need to learn the function that maps X to Y.
Why is deep learning taking off?
Deep learning is taking off for three main reasons:
i. Data:
Using this image we can conclude:
For small data, a NN can perform like linear regression or an SVM (support vector machine).
For big data, a small NN is better than an SVM.
For big data, a big NN is better than a medium NN, which is better than a small NN.
Hopefully we have a lot of data, because the world is using computers a little bit more:
Mobiles
IOT (Internet of things)
ii. Computation:
GPUs.
Powerful CPUs.
Distributed computing.
ASICs
iii. Algorithm:
a. Creative algorithms have appeared that changed the way NNs work.
For example, using the RELU function is much better than using the SIGMOID function for training a NN, because it helps with the vanishing gradient problem.
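A quick NumPy sketch of that difference (the z values are illustrative): the sigmoid slope g(z)(1-g(z)) collapses toward zero for large |z|, while the RELU slope stays 1 for any positive z:

import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

z = np.array([-10.0, 0.0, 10.0])
sigmoid_grad = sigmoid(z) * (1 - sigmoid(z))  # g'(z) = g(z)(1 - g(z))
relu_grad = (z > 0).astype(float)             # 0 for z <= 0, 1 for z > 0

print(sigmoid_grad)  # ~[4.5e-05, 0.25, 4.5e-05] -- vanishes for large |z|
print(relu_grad)     # [0. 0. 1.] -- stays 1 for positive z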
Binary classification
Mainly he is talking about how to use logistic regression to make a binary classifier.
Gradient Descent
We want to find w and b that minimize the cost function.
First we initialize w and b to zeros, or to random values on the convex function, and then iteratively improve them to reach the minimum.
The gradient descent algorithm repeats w = w - alpha * dw, where alpha is the learning rate and dw is the derivative of the cost with respect to w (the change to apply to w). The derivative is also the slope in the w direction.
It looks like a greedy algorithm: the derivative gives us the direction in which to improve our parameters.
w = w - alpha * d(J(w,b))/dw (how much the function slopes in the w direction)
b = b - alpha * d(J(w,b))/db (how much the function slopes in the b direction)
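A minimal sketch of these updates on a made-up 1-D convex cost J(w) = (w - 3)^2, so dJ/dw = 2(w - 3):

alpha = 0.1  # learning rate
w = 0.0      # initialization
for _ in range(100):
    dw = 2 * (w - 3)    # derivative (slope) at the current w
    w = w - alpha * dw  # step downhill
print(w)  # converges toward 3.0, the minimizer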
Derivatives
We will talk about some of the required calculus.
You don't need to be a calculus geek to master deep learning, but you'll need some skills from it.
The derivative of a linear function is its slope.
ex. f(a) = 3a, so d(f(a))/d(a) = 3
if a = 2 then f(a) = 6
if we move a a little bit, a = 2.001, then f(a) = 6.003, which means we multiplied the derivative (slope) by the amount a moved and added it to the previous result.
To conclude: the derivative is the slope, and the slope differs at different points of the function; that's why the derivative is itself a function.
Computation graph
It's a graph that organizes a computation from left to right.
Derivatives with a Computation Graph
The calculus chain rule says: if x -> y -> z (x affects y and y affects z), then d(z)/d(x) = d(z)/d(y) * d(y)/d(x).
We compute the derivatives on the graph from right to left, which makes the computation much easier.
dvar means the derivative of the final output variable with respect to various intermediate quantities.
x1: feature
x2: feature
w1: weight of the first feature
w2: weight of the second feature
b: logistic regression parameter (bias)
m: number of training examples
Y(i): expected output of example i
So we have:
Then, from right to left, we compute the derivatives with respect to the result:
From the above we can conclude the logistic regression pseudo code:
J = 0; dw1 = 0; dw2 = 0; db = 0
w1 = 0; w2 = 0; b = 0
for i = 1 to m
    # Forward pass
    z(i) = w1*x1(i) + w2*x2(i) + b
    a(i) = sigmoid(z(i))
    J += -(Y(i)*log(a(i)) + (1-Y(i))*log(1-a(i)))
    # Backward pass
    dz(i) = a(i) - Y(i)
    dw1 += dz(i) * x1(i)
    dw2 += dz(i) * x2(i)
    db += dz(i)
J /= m
dw1 /= m
dw2 /= m
db /= m
# Gradient descent
w1 = w1 - alpha * dw1
w2 = w2 - alpha * dw2
b = b - alpha * db
The above code should run for a number of iterations to minimize the error.
Vectorization is very important in deep learning to reduce loops. Using vectorization, the whole inner loop above can be done in one step!
Vectorization
Deep learning shines when datasets are big, but for loops will make you wait a long time for a result. That's why we need vectorization to get rid of some of our for loops.
The NumPy dot function uses vectorization by default.
Vectorization can be done on CPU or GPU through SIMD operations, but it's faster on GPU.
Whenever possible, avoid for loops.
Most NumPy methods are vectorized versions.
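A small timing sketch of that difference (the array sizes are arbitrary):

import time
import numpy as np

a = np.random.rand(1000000)
b = np.random.rand(1000000)

tic = time.time()
c = np.dot(a, b)  # vectorized: SIMD under the hood
print("np.dot:", time.time() - tic, "seconds")

tic = time.time()
c = 0.0
for i in range(len(a)):  # explicit Python for loop
    c += a[i] * b[i]
print("loop:", time.time() - tic, "seconds")  # typically orders of magnitude slower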
Vectorizing Logistic Regression
As input we have a matrix X of shape (Nx, m) and a matrix Y of shape (Ny, m).
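A sketch of one vectorized training iteration under these shape assumptions (the function name one_iteration is just for illustration):

import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def one_iteration(W, b, X, Y, alpha):
    # Assumed shapes: X is (Nx, m), Y is (1, m), W is (Nx, 1), b is a scalar
    m = X.shape[1]
    Z = np.dot(W.T, X) + b    # forward pass for all m examples at once, shape (1, m)
    A = sigmoid(Z)            # predictions, shape (1, m)
    dZ = A - Y                # backward pass, shape (1, m)
    dW = np.dot(X, dZ.T) / m  # shape (Nx, 1)
    db = np.sum(dZ) / m       # scalar
    W = W - alpha * dW        # gradient descent update
    b = b - alpha * db
    return W, b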
Notes on Python and NumPy
Reshape is computationally cheap, so use it whenever you're not sure about an array's shape.
Broadcasting works when you do a matrix operation with matrices whose shapes don't match for the operation: NumPy automatically makes the shapes compatible by broadcasting the values.
The general principle of broadcasting: if you have an (m,n) matrix and you add (+), subtract (-), multiply (*), or divide (/) it by a (1,n) matrix, the latter is copied m times into an (m,n) matrix. Likewise, if you use those operations with an (m,1) matrix, it is copied n times into an (m,n) matrix. The addition, subtraction, multiplication, or division is then applied element-wise.
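A small NumPy sketch of both broadcasting cases:

import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])       # shape (2, 3)
row = np.array([[10.0, 20.0, 30.0]])  # shape (1, 3): copied down the 2 rows
col = np.array([[100.0], [200.0]])    # shape (2, 1): copied across the 3 columns

print(A + row)  # [[ 11.  22.  33.] [ 14.  25.  36.]]
print(A * col)  # [[ 100.  200.  300.] [ 800. 1000. 1200.]]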
If you don't specify the shape of a vector, it will take the shape (m,) (a rank-1 array) and the transpose operation won't work. You have to reshape it to (m, 1).
Try not to use rank-1 arrays in a NN.
Don't hesitate to use assert(a.shape == (5,1)) to check that your matrix has the required shape.
If you find a rank-1 array, run reshape on it.
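A short sketch of the rank-1 pitfall and its fix:

import numpy as np

a = np.random.randn(5)    # rank-1 array, shape (5,) -- avoid this
print(a.shape)            # (5,)
print((a.T == a).all())   # True: transpose does nothing on a rank-1 array

a = a.reshape(5, 1)       # fix: make it an explicit column vector
assert a.shape == (5, 1)  # cheap sanity check on the shape
print(a.T.shape)          # (1, 5) -- transpose now behaves as expected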
Jupyter / IPython notebooks are very useful Python tools that make it easy to combine code and documentation in one place. They run in the browser and don't need an IDE.
To open a Jupyter notebook, open the command line and run jupyter-notebook (it has to be installed first).
v = image.reshape(image.shape[0] * image.shape[1] * image.shape[2], 1) # flattens the image into a column vector
General Notes
The main steps for building a Neural Network are:
Define the model structure (such as number of input features and outputs)
Initialize the model's parameters.
Loop:
Calculate current loss (forward propagation)
Calculate current gradient (backward propagation)
Update parameters (gradient descent)
Preprocessing the dataset is important.
Tuning the learning rate (which is an example of a "hyperparameter") can make a big
difference to the algorithm.
kaggle.com is a good place for datasets and competitions.
Pieter Abbeel is one of the best in deep reinforcement learning.
Neural Networks Overview
In logistic regression we had:
X1  \
X2   ==> z = XW + B ==> a = Sigmoid(z) ==> l(a,Y)
X3  /
In a neural network with one hidden layer we have:
X1  \
X2   => z1 = XW1 + B1 => a1 = Sigmoid(z1) => z2 = a1W2 + B2 => a2 = Sigmoid(z2) => l(a2,Y)
X3  /
X is the input vector (X1, X2, X3) and Y is the output variable (1x1).
We are talking about a 2-layer NN; the input layer isn't counted.
Nx = 3
for i = 1 to m
z[1, i] = W1*x[i] + b1 # shape of z[1, i] is (noOfHiddenNeurons,1)
a[1, i] = sigmoid(z[1, i]) # shape of a[1, i] is (noOfHiddenNeurons,1)
z[2, i] = W2*a[1, i] + b2 # shape of z[2, i] is (1,1)
a[2, i] = sigmoid(z[2, i]) # shape of a[2, i] is (1,1)
In the last example we can call X = A0 . So the previous step can be rewritten as:
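A sketch of the vectorized form, stacking the m training examples as columns:

Z1 = W1*A0 + b1   # shape (noOfHiddenNeurons, m); A0 = X
A1 = sigmoid(Z1)  # shape (noOfHiddenNeurons, m)
Z2 = W2*A1 + b2   # shape (1, m); b2 is broadcast
A2 = sigmoid(Z2)  # shape (1, m)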
Activation functions
So far we have been using sigmoid, but in some cases other functions can be a lot better.
Sigmoid can lead to the vanishing gradient problem, where the updates become very small.
The sigmoid activation function's range is (0,1): A = 1 / (1 + np.exp(-z)) # where z is the input matrix
It turns out that the tanh activation usually works better than the sigmoid activation function for hidden units, because the mean of its output is closer to zero, so it centers the data better for the next layer.
A disadvantage of sigmoid and tanh is that if the input is very small or very large, the slope will be near zero, which causes the vanishing gradient problem.
One of the popular activation functions that solves the slow-gradient problem is the RELU function. RELU = max(0,z) # so if z is negative the slope is 0, and if z is positive the slope is 1
A basic rule for choosing activation functions: if your classification output is between 0 and 1, use sigmoid for the output activation and RELU for the others.
Leaky RELU differs from RELU in that if the input is negative the slope is small instead of zero. It works like RELU, but most people use RELU. Leaky_RELU = max(0.01*z, z) # the 0.01 can be a parameter of your algorithm
Derivation of Sigmoid activation function:
g(z) = 1 / (1 + np.exp(-z))
g'(z) = (1 / (1 + np.exp(-z))) * (1 - (1 / (1 + np.exp(-z))))
g'(z) = g(z) * (1 - g(z))
Derivation of Tanh activation function:
g(z) = np.tanh(z) = (e^z - e^-z) / (e^z + e^-z)
g'(z) = 1 - np.tanh(z)^2 = 1 - g(z)^2
Derivation of RELU activation function:
g(z) = np.maximum(0, z)
g'(z) = { 0 if z < 0
          1 if z >= 0 }
Derivation of leaky RELU activation function:
g(z) = np.maximum(0.01 * z, z)
g'(z) = { 0.01 if z < 0
          1    if z >= 0 }
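A quick finite-difference check of the sigmoid and tanh derivatives above (eps is an arbitrary small step):

import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

z = np.linspace(-3, 3, 7)
eps = 1e-6

numeric = (sigmoid(z + eps) - sigmoid(z - eps)) / (2 * eps)  # finite differences
analytic = sigmoid(z) * (1 - sigmoid(z))                     # g'(z) = g(z)(1 - g(z))
print(np.allclose(numeric, analytic, atol=1e-6))             # True

numeric_tanh = (np.tanh(z + eps) - np.tanh(z - eps)) / (2 * eps)
analytic_tanh = 1 - np.tanh(z) ** 2                          # g'(z) = 1 - g(z)^2
print(np.allclose(numeric_tanh, analytic_tanh, atol=1e-6))   # True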
NN parameters:
n[0] = Nx
n[1] = NoOfHiddenNeurons
n[2] = NoOfOutputNeurons = 1
W1 shape is (n[1],n[0])
b1 shape is (n[1],1)
W2 shape is (n[2],n[1])
b2 shape is (n[2],1)
Repeat:
Compute predictions (y'[i], i = 0,...m)
Get derivatives: dW1, db1, dW2, db2
Update: W1 = W1 - LearningRate * dW1
b1 = b1 - LearningRate * db1
W2 = W2 - LearningRate * dW2
b2 = b2 - LearningRate * db2
Forward propagation:
Z1 = W1A0 + b1 # A0 is X
A1 = g1(Z1)
Z2 = W2A1 + b2
A2 = Sigmoid(Z2) # Sigmoid because the output is between 0 and 1
Backpropagation (derivations):
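A sketch of the gradients in the same pseudocode style (g1' denotes the derivative of the hidden layer's activation function):

dZ2 = A2 - Y
dW2 = np.dot(dZ2, A1.T) / m
db2 = np.sum(dZ2, axis=1, keepdims=True) / m
dZ1 = np.dot(W2.T, dZ2) * g1'(Z1)   # element-wise product
dW1 = np.dot(dZ1, A0.T) / m
db1 = np.sum(dZ1, axis=1, keepdims=True) / m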
Random Initialization
In logistic regression it wasn't important to initialize the weights randomly, but in a NN we have to initialize them randomly.
If we initialize all the weights with zeros in a NN it won't work (initializing the bias with zero is OK):
all hidden units will be completely identical (symmetric) and compute exactly the same function
on each gradient descent iteration, all the hidden units will always update in the same way
We need small values because, with sigmoid (or tanh), if the weights are too large you are more likely to end up, even at the very start of training, with very large values of Z, which saturate the tanh or sigmoid activation function and slow down learning. If you don't have any sigmoid or tanh activation functions in your neural network, this is less of an issue.
The constant 0.01 is fine for a 1-hidden-layer network; if the NN is deep this number can be changed, but it should always be a small number.
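A minimal sketch of this initialization (the layer sizes here are made up):

import numpy as np

n0, n1, n2 = 3, 4, 1  # hypothetical layer sizes: input, hidden, output

W1 = np.random.randn(n1, n0) * 0.01  # small random values break the symmetry
b1 = np.zeros((n1, 1))               # zero bias is fine
W2 = np.random.randn(n2, n1) * 0.01
b2 = np.zeros((n2, 1))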
Deep L-layer neural network
n[0] denotes the number of neurons in the input layer. n[L] denotes the number of neurons in the output layer.
g[l] is the activation function of layer l.
a[l] = g[l](z[l])
These are the notations we will use for deep neural networks.
So we have:
A vector n of shape (1, NoOfLayers+1)
A vector g of shape (1, NoOfLayers)
A list of different shapes w based on the number of neurons on the previous and
the current layer.
A list of different shapes b based on the number of neurons on the current layer.
Forward Propagation in a Deep Network
Forward propagation general rule for one input:
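For a single input x, in the same notation as the pseudocode below:

z[l] = W[l]a[l-1] + b[l]
a[l] = g[l](z[l])

Vectorized over the m examples stacked as columns:

Z[l] = W[l]A[l-1] + b[l]
A[l] = g[l](Z[l])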
We can't compute the forward propagation of all layers without a for loop, so it's OK to have a for loop here.
Getting your matrix dimensions right
The dimensions of the matrices are so important that you should work them out by hand: W[l] has shape (n[l], n[l-1]) and b[l] has shape (n[l], 1).
Building blocks of deep neural networks
Deep NN blocks:
Forward and Backward Propagation
Pseudo code for forward propagation for layer l:
Input A[l-1]
Z[l] = W[l]A[l-1] + b[l]
A[l] = g[l](Z[l])
Output A[l], cache(Z[l])
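A runnable NumPy sketch of this loop over all layers (using RELU for hidden layers and sigmoid for the output is an assumption for illustration):

import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def relu(z):
    return np.maximum(0, z)

def forward(X, Ws, bs):
    # Ws[l] and bs[l] hold W[l+1] and b[l+1]; A starts as A0 = X
    A = X
    caches = []
    L = len(Ws)
    for l in range(L):
        Z = np.dot(Ws[l], A) + bs[l]               # Z[l] = W[l]A[l-1] + b[l]
        A = sigmoid(Z) if l == L - 1 else relu(Z)  # output layer uses sigmoid
        caches.append(Z)                           # cache Z[l] for backprop
    return A, caches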
Parameters vs Hyperparameters
The main parameters of the NN are W and b.
Hyperparameters (parameters that control the algorithm) include:
Learning rate.
Number of iterations.
Number of hidden layers L .
Number of hidden units n .
Choice of activation functions.
You have to try out the values of the hyperparameters yourself.
In the earlier days of DL and ML, the learning rate was often called a parameter, but it really is (and everyone now calls it) a hyperparameter.
In the next course we will see how to optimize hyperparameters.