Chapter One
Neural networks are artificial systems inspired by biological neural networks. These systems learn to perform tasks by being exposed to datasets and examples, without any task-specific rules. The idea is that the system derives identifying characteristics from the data it is given, without being pre-programmed with any understanding of those datasets.
The following example will help you understand neural networks. Consider a scenario where you have a set of labeled images and must classify each image as either a dog or a cat. To do this, you create a neural network that recognizes images of cats and dogs. The network starts by processing the input. Each image is made of pixels; for example, an image of 20 × 20 pixels contains 400 pixels in total. Those 400 pixels form the first layer of our neural network.
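The idea of turning an image into an input layer can be sketched in a few lines of Python. The pixel values below are made up purely for illustration:

```python
# Sketch: flattening a 20 x 20 grayscale image into a 400-value input vector.
# The pixel values here are invented for illustration.
image = [[(row * 20 + col) % 256 for col in range(20)] for row in range(20)]

# Each pixel becomes one input node, so the input layer holds 400 values.
input_layer = [pixel for row in image for pixel in row]

print(len(input_layer))  # 400
```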
A neural network is made of artificial neurons that receive and process input data. Data passes through the input layer, the hidden layers, and the output layer. The process starts when input data is fed to the network; the data is then processed through its layers to produce the desired output. A neural network learns from structured data and produces the corresponding output.
Learning within neural networks falls into three categories:
Supervised Learning - with the help of labeled data, inputs and outputs are fed to the algorithm, which then predicts the desired result after being trained on how to interpret the data.
Unsupervised Learning - the network learns with no human intervention. There is no labeled data, and the output is determined from patterns identified within the input data.
Reinforcement Learning - the network learns from the feedback you give it.
The neurons receive many inputs and produce a single output.
Neural networks are composed of layers of neurons.
These layers consist of the following:
Input layer
Multiple hidden layers
Output layer
The input layer receives data represented by a numeric value. Hidden layers perform the most
computations required by the network. Finally, the output layer predicts the output.
In a neural network, each layer is made of neurons, and each layer feeds the next. Once the input layer receives data, it is passed on to the hidden layer. Each input is assigned a weight. A weight is a value that transforms input data as it moves through the network's hidden layers: the input layer takes the input data and multiplies it by the weight value, producing the value passed to the first hidden layer. The hidden layers transform the data and pass it on to the next layer, and the output layer produces the desired output.
The inputs and weights are multiplied, and their sum is sent to the neurons in the hidden layer. A bias is applied to each neuron. Each neuron adds up the inputs it receives, and this sum then passes through the activation function.
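A single artificial neuron can be sketched directly from this description: multiply inputs by weights, add the bias, and pass the sum through an activation function. The sigmoid used here is one common choice of activation, and the numbers are invented for illustration:

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs plus a bias,
    passed through a sigmoid activation (one common choice)."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))  # sigmoid squashes the sum into (0, 1)

# Example values, chosen only to show the computation:
output = neuron([0.5, 0.3], [0.4, -0.2], bias=0.1)
print(output)
```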
The outcome of the activation function then decides whether a neuron is activated. An activated neuron transfers information on to the next layers. In this way, data moves through the network until it reaches the output layer; this is called forward propagation. Feed-forward propagation is the process of feeding data into the input nodes and obtaining the output through the output nodes.
Feed-forward propagation takes place when the hidden layer accepts the input data, processes it according to the activation function, and passes it to the output layer. The neuron in the output layer with the highest probability then projects the result.
If the output is wrong, backpropagation takes place. When a neural network is designed, weights are initialized for each input. Backpropagation means re-adjusting each input's weights to minimize the errors, resulting in a more accurate output.
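The weight re-adjustment idea can be sketched with a single linear neuron trained by gradient descent on one example. This is a minimal illustration of the update rule, not full backpropagation through many layers; the inputs, target, and learning rate are all invented:

```python
# Minimal sketch of weight re-adjustment: a single linear neuron moves its
# weights opposite to the error until its output matches the target.
inputs = [1.0, 2.0]
target = 1.0
weights = [0.0, 0.0]
learning_rate = 0.1

for _ in range(50):
    output = sum(x * w for x, w in zip(inputs, weights))
    error = output - target  # how far the output is from the desired value
    # Each weight is nudged opposite to its contribution to the error.
    weights = [w - learning_rate * error * x for x, w in zip(inputs, weights)]

final_output = sum(x * w for x, w in zip(inputs, weights))
print(final_output)  # close to the target of 1.0
```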
1.3. Types of Neural Networks
Neural networks are classified by the mathematical operations and principles they use to determine the output. Below we will go over the different types of neural networks.
a. Single-layer neural network (Perceptron network)
It is one of the simplest models that can learn and solve data problems using neural networks. A perceptron is also called an artificial neuron.
A perceptron network is comprised of two layers:
Input Layer
Output Layer
The input layer computes the weighted sum of the inputs for every node, and the activation function is applied to get the result as output.
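A classic perceptron uses a step activation: it fires 1 when the weighted sum plus bias is non-negative, and 0 otherwise. The sketch below uses hand-picked weights and bias so the perceptron computes logical AND, a standard toy example:

```python
def perceptron(inputs, weights, bias):
    """Single-layer perceptron: weighted sum of the inputs plus a bias,
    followed by a step activation (fires 1 if the sum is non-negative)."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if total >= 0 else 0

# With weights of 1 and a bias of -1.5, this perceptron computes logical AND:
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", perceptron([a, b], [1.0, 1.0], bias=-1.5))
```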
RNNs are used to solve problems in stock prediction, text data, and audio data.
In other words, it’s used to solve similar problems in text-to-speech conversion and language
translation.
e. Convolution Neural Network
Convolutional Neural Networks (CNNs) are commonly used for image recognition. CNNs contain a three-dimensional arrangement of neurons.
The first stage is the convolutional layer. Neurons in a convolutional layer process information only from a small part of the visual field (the image). Input features are abstracted in batches.
The second stage is pooling. It reduces the dimensions of the features while retaining valuable information.
CNNs move into the third stage (a fully connected neural network) when the features reach the right level of granularity. At the final stage, the network analyzes the final probabilities and decides which class the image belongs to.
This type of network understands the image in parts. It also computes the operations multiple
times to complete the processing of the image.
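The first two stages can be sketched on a tiny image in plain Python. The 5 × 5 "image", the 2 × 2 filter, and all values below are invented for illustration; real CNNs learn their filter values during training:

```python
# Sketch of the first two CNN stages: a 2 x 2 convolution filter slides over
# a tiny 5 x 5 "image", then 2 x 2 max pooling shrinks the feature map.
image = [
    [1, 2, 0, 1, 3],
    [0, 1, 3, 1, 0],
    [2, 1, 0, 0, 1],
    [1, 0, 1, 2, 0],
    [0, 3, 1, 0, 2],
]
kernel = [[1, 0], [0, 1]]  # a made-up 2 x 2 filter

def convolve(img, k):
    """Slide the filter over the image, taking a weighted sum at each spot."""
    size = len(img) - len(k) + 1
    return [[sum(img[r + i][c + j] * k[i][j]
                 for i in range(len(k)) for j in range(len(k)))
             for c in range(size)] for r in range(size)]

def max_pool(feat):
    """2 x 2 max pooling: keep the largest value in each 2 x 2 block,
    halving each dimension while retaining the strongest responses."""
    return [[max(feat[r][c], feat[r][c + 1], feat[r + 1][c], feat[r + 1][c + 1])
             for c in range(0, len(feat[0]) - 1, 2)]
            for r in range(0, len(feat) - 1, 2)]

features = convolve(image, kernel)  # 4 x 4 feature map
pooled = max_pool(features)         # 2 x 2 after pooling
print(pooled)
```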
Image processing involves converting the image from RGB to grayscale. After the image is processed, changes in pixel values help identify the edges, and the images are then grouped into different classes.
CNNs are mainly used in signal and image processing.
f. Modular Neural Network
A Modular Neural Network (MNN) is composed of independent networks, each working individually to produce the output. The various neural networks do not interact with each other, and each network receives a unique set of inputs.
MNNs are advantageous because large, complex computational processes complete faster: the processes are broken down into independent components, which increases computational speed.
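The modular idea can be sketched with two stand-in "sub-networks" that never interact: each sees only its own inputs, and their results are merged at the end. The sub-networks here are trivial scoring functions, and all names and values are hypothetical:

```python
# Sketch of a modular network: two independent modules, each with its own
# inputs, whose outputs are combined without the modules ever interacting.
def shape_network(shape_features):
    """Stand-in sub-network: scores shape-related features (average)."""
    return sum(shape_features) / len(shape_features)

def color_network(color_features):
    """Stand-in sub-network: scores color-related features (maximum)."""
    return max(color_features)

def modular_network(shape_features, color_features):
    # Each module processes only its own inputs; results merge at the end.
    return (shape_network(shape_features) + color_network(color_features)) / 2

score = modular_network([0.2, 0.4, 0.6], [0.1, 0.9])
print(score)
```

Because the two modules are independent, they could run in parallel, which is the source of the speed advantage described above.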