Deep Learning
Artificial neural networks, also known as neural networks or neural nets, are
built on the principles of the structure and operation of human neurons. An
artificial neural network’s input layer, which is the first layer, receives
input from external sources and passes it on to the hidden layer, which is
the second layer. Each neuron in the hidden layer gets information from the
neurons in the previous layer, computes the weighted total, and then
transfers it to the neurons in the next layer. These connections are
weighted, meaning the impact of each input from the preceding layer is scaled
by giving it a distinct weight. These weights are then adjusted during the
training process to improve the performance of the model.
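As a minimal sketch of a single hidden-layer neuron computing a weighted total (the input values, weights, and bias below are illustrative assumptions, not values from the text):

import numpy as np

# One artificial neuron: values and weights are illustrative.
inputs = np.array([0.5, -1.2, 3.0])   # outputs of the previous layer
weights = np.array([0.8, 0.1, -0.4])  # one distinct weight per input
bias = 0.2

# Weighted total of the inputs, plus a bias term.
weighted_total = np.dot(inputs, weights) + bias

# Squash with a sigmoid before passing the value to the next layer.
output = 1.0 / (1.0 + np.exp(-weighted_total))
print(output)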
Artificial neurons, also known as units, are the building blocks of an
artificial neural network, and they are arranged in a series of layers.
Whether a layer has a dozen units or millions of them depends on the
complexity of the underlying patterns in the dataset. Commonly, an Artificial
Neural Network has an input layer, an output layer, and one or more hidden
layers. The input layer receives data from the outside world that the neural
network needs to analyze or learn about.
In a fully connected artificial neural network, there is an input layer
followed by one or more hidden layers connected one after the other. Each
neuron receives input from the neurons of the previous layer (or from the
input layer itself). The output of one neuron becomes the input to the
neurons in the next layer, and this process continues until the final layer
produces the output of the network. As the data passes through the hidden
layers, it is transformed into a representation that is useful to the output
layer. Finally, the output layer provides its output as the artificial neural
network’s response to the incoming data.
In most neural networks, units in one layer are linked to units in the next.
Each of these links has a weight that controls how much one unit influences
another. As data moves from one unit to the next, the neural network learns
more and more about it, ultimately producing an output from the output layer,
as in the sketch below.
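A minimal sketch of this layer-by-layer forward pass (the layer sizes and random weights are illustrative assumptions, not values from the text):

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative layer sizes: 4 inputs -> 5 hidden units -> 2 outputs.
sizes = [4, 5, 2]
weights = [rng.standard_normal((m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def forward(x):
    # The output of each layer becomes the input to the next.
    for W, b in zip(weights, biases):
        x = sigmoid(x @ W + b)
    return x

print(forward(rng.standard_normal(4)))  # the network's response to one input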
Xavier Initialization
The goal of Xavier Initialization is to initialize the weights such that the
variance of the activations is the same across every layer. This constant
variance helps prevent the gradient from exploding or vanishing. For a layer
with n_in inputs and n_out outputs, the standard Xavier (Glorot) scheme draws
weights uniformly from (-sqrt(6 / (n_in + n_out)), sqrt(6 / (n_in + n_out)));
a common simplified variant uses (-1/sqrt(n), 1/sqrt(n)), where n is the
number of inputs.
What is variance?
Variance is a measure of how data points differ from the mean. In layman’s
terms, variance measures how far a set of numbers is spread out from its mean
(average) value: it is the expected squared deviation from the mean.
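As a quick worked example (the data values here are illustrative):

import numpy as np

data = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])

mean = data.mean()
variance = ((data - mean) ** 2).mean()  # average squared deviation from the mean

print(mean, variance)  # 5.0 4.0
print(np.var(data))    # 4.0, the same result via NumPy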
Returning to Xavier Initialization, the lines below are sample output from a
weight-initialization demo (first the theoretical bounds, then the observed
minimum and maximum, then the observed mean and standard deviation; exact
values vary with the random draw):
-0.31622776601683794 0.31622776601683794
-0.3162093939621687 0.3159240408550899
0.012495820714597164 0.1806882270049287
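Output like this is consistent with the simplified Xavier scheme described above; a minimal sketch, assuming n = 10 inputs and 1,000 sampled weights (both assumptions, not stated in the text):

import numpy as np

# Simplified Xavier initialization: draw weights uniformly from
# (-1/sqrt(n), 1/sqrt(n)), where n is the number of inputs to the layer.
n = 10          # assumed number of inputs
lower, upper = -1.0 / np.sqrt(n), 1.0 / np.sqrt(n)

weights = np.random.uniform(lower, upper, size=1000)  # 1,000 sampled weights

print(lower, upper)                   # theoretical bounds
print(weights.min(), weights.max())   # observed minimum and maximum
print(weights.mean(), weights.std())  # observed mean and standard deviation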
Syntax: matplotlib.pyplot.errorbar(x, y, yerr=None, xerr=None, fmt='',
ecolor=None, elinewidth=None, capsize=None, barsabove=False,
lolims=False, uplims=False, xlolims=False, xuplims=False, errorevery=1,
capthick=None, *, data=None, **kwargs)
Parameters: This method accepts the following parameters, described below (a
short usage sketch follows the list):
x, y: These parameters are the horizontal and vertical coordinates of the
data points.
fmt: This parameter is optional; it contains a format string.
xerr, yerr: These parameters contain arrays of error values, and the
values should be positive.
ecolor: This parameter is optional; it is the color of the error bar
lines, with default value None.
elinewidth: This parameter is also optional; it is the line width of the
error bar lines, with default value None.
capsize: This parameter is also optional; it is the length of the error
bar caps in points, with default value None.
barsabove: This parameter is also optional; it takes the boolean value
True to plot error bars above the plot symbols. Its default value is
False.
lolims, uplims, xlolims, xuplims: These parameters are also optional;
they take boolean values indicating that a given value provides only an
upper or lower limit.
errorevery: This parameter is also optional; it takes an integer value
used to draw error bars on only a subset of the data points.
Returns: This method returns an ErrorbarContainer comprising the following:
plotline: the Line2D instance of the x, y plot markers and/or line.
caplines: a tuple of Line2D instances of the error bar caps.
barlinecols: a tuple of LineCollection instances with the horizontal
and vertical error ranges.
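A minimal usage sketch (the data and styling values are illustrative assumptions, not from the text):

import numpy as np
import matplotlib.pyplot as plt

# Illustrative data: five points with per-point vertical error values.
x = np.arange(1, 6)
y = np.array([2.0, 3.5, 3.0, 4.5, 5.0])
yerr = np.array([0.3, 0.2, 0.4, 0.25, 0.3])  # error values must be positive

# Plot markers with vertical error bars, red bar lines, and capped ends.
plt.errorbar(x, y, yerr=yerr, fmt='o', ecolor='red', elinewidth=1, capsize=3)
plt.xlabel('x')
plt.ylabel('y')
plt.show()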
All of these factors contribute to the flexibility of the model. For
instance, a high-bias model that fails to match the data set will be
inflexible; it has low variance, but it still results in a suboptimal
machine learning model.