ML tushar assignment
Assignment no 4
Machine learning
Submitted to: Ms. Shilpi Bansal
Q1 What is an Artificial Neural Network (ANN) and how does it draw inspiration from biological neural
networks?
Ans:
An Artificial Neural Network (ANN) is a computational model inspired by the way biological neural networks,
such as the human brain, process information. It is designed to recognize patterns and learn from data through a
system of interconnected nodes, or "neurons," organized in layers.
Key Components of an ANN
1. Neurons (Nodes): Each neuron in an ANN is a simple processing unit that receives input, applies a
weight to it, and passes it through an activation function. This mimics the way biological neurons
receive input from other neurons, process it, and decide whether to activate.
2. Layers: ANNs are structured in layers:
o Input Layer: Receives the raw data or input values.
o Hidden Layers: Layers between the input and output where most processing happens, allowing
the network to detect complex patterns.
o Output Layer: Produces the final output or prediction.
3. Weights and Biases: Each connection between neurons has a weight, which determines the importance
of the input. Weights are adjusted during training to minimize errors. Biases are additional parameters
that help the model fit the data more flexibly.
4. Activation Functions: Non-linear functions applied at each neuron to introduce non-linearity, allowing
ANNs to learn complex patterns.
Biological Inspiration
ANNs draw inspiration from biological neural networks in the following ways:
1. Structure and Connectivity: Just as neurons in the brain are connected in networks, ANN neurons are
connected in layers and pass information from one layer to the next.
2. Learning Process: Just as the brain strengthens or weakens neural connections through learning, ANNs
adjust weights through a process called backpropagation during training, allowing the model to "learn"
from errors and improve performance.
3. Parallel Processing: Biological neurons process information in parallel, enabling complex information
processing. Similarly, ANNs process data across many nodes in parallel, enabling them to handle large
datasets.
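The components above (weighted sums, biases, a non-linear activation, and stacked layers) can be sketched in plain Python. The network shape and the weights here are purely illustrative, chosen by hand; a real ANN would learn them from data during training.

```python
from math import exp

def sigmoid(z):
    # A common non-linear activation function, squashing z into (0, 1)
    return 1.0 / (1.0 + exp(-z))

def forward(x, w_hidden, b_hidden, w_out, b_out):
    """One forward pass: input layer -> hidden layer -> output layer."""
    # Hidden layer: weighted sum of inputs plus bias, then activation
    hidden = [sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
              for w, b in zip(w_hidden, b_hidden)]
    # Output layer: same pattern applied to the hidden activations
    return sigmoid(sum(wi * hi for wi, hi in zip(w_out, hidden)) + b_out)

# Tiny 2-input, 2-hidden-neuron, 1-output network with hand-picked weights
y = forward([1.0, 0.0],
            w_hidden=[[0.5, -0.3], [0.2, 0.8]], b_hidden=[0.1, -0.1],
            w_out=[1.0, -1.0], b_out=0.0)
print(y)
```

Each hidden neuron mirrors a biological neuron: it aggregates weighted inputs and "decides" how strongly to activate via the sigmoid.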
Q2 Explain the Hebbian learning rule and its significance in neural networks.
Ans The Hebbian learning rule, also known as Hebb's rule, was proposed by Donald O. Hebb. It is one of the first
and simplest learning rules for neural networks and is used for pattern classification. It applies to a single-layer neural
network, i.e. one with an input layer and an output layer. The input layer can have many units, say n; the output layer
has only one unit. The Hebbian rule works by updating the weights between neurons for each training sample.
1. Initialize all weights and the bias to zero: wi = 0 for i = 1 to n, and b = 0.
2. For each training pair (input vector s, target output t), perform steps 3 to 5.
3. Set activations for the input units with the input vector: xi = si for i = 1 to n.
4. Set the activation of the output unit to the target value, i.e. y = t.
5. Update the weight and bias by applying the Hebb rule for all i = 1 to n:
wi(new) = wi(old) + xi * y
b(new) = b(old) + y
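The steps above can be sketched in plain Python. This example trains on the bipolar AND function (a standard textbook choice; the dataset is illustrative, not from the original text), applying the Hebb updates wi += xi * y and b += y once per training pair.

```python
def hebb_train(samples):
    """Hebbian learning: one pass over the training pairs (x, t)."""
    n = len(samples[0][0])
    w = [0.0] * n          # step 1: weights initialized to zero
    b = 0.0                # step 1: bias initialized to zero
    for x, t in samples:   # step 2: loop over training pairs
        y = t              # step 4: output unit is set to the target
        for i in range(n):
            w[i] += x[i] * y   # step 5: wi(new) = wi(old) + xi * y
        b += y                 # step 5: b(new) = b(old) + y
    return w, b

# Bipolar AND: inputs and targets are in {-1, +1}
and_samples = [([1, 1], 1), ([1, -1], -1), ([-1, 1], -1), ([-1, -1], -1)]
w, b = hebb_train(and_samples)
print(w, b)   # -> [2.0, 2.0] -2.0
```

The learned separator 2*x1 + 2*x2 - 2 is positive only for the input (1, 1), so it classifies bipolar AND correctly.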
Q3 What is the perceptron learning rule, and how does it adjust weights during training?
Ans
What is Perceptron?
Perceptron is a type of neural network that performs binary classification that maps input features to an output
decision, usually classifying data into one of two categories, such as 0 or 1.
Perceptron consists of a single layer of input nodes that are fully connected to a layer of output nodes. It is
particularly good at learning linearly separable patterns. It utilizes a variation of artificial neurons
called Threshold Logic Units (TLU), which were first introduced by Warren McCulloch and Walter Pitts in the 1940s.
This foundational model has played a crucial role in the development of more advanced neural networks and
machine learning algorithms.
Types of Perceptron
1. Single-Layer Perceptron: This type of perceptron is limited to learning linearly separable patterns. It is
effective for tasks where the data can be divided into distinct categories by a straight line. While
powerful in its simplicity, it struggles with more complex problems where the relationship between
inputs and outputs is non-linear.
2. Multi-Layer Perceptron: Multi-layer perceptrons possess enhanced processing capabilities, as they
consist of two or more layers and are adept at handling more complex patterns and relationships within
the data.
Basic Components of Perceptron
A Perceptron is composed of key components that work together to process information and make predictions.
Input Features: The perceptron takes multiple input features, each representing a characteristic of the
input data.
Weights: Each input feature is assigned a weight that determines its influence on the output. These
weights are adjusted during training to find the optimal values.
Summation Function: The perceptron calculates the weighted sum of its inputs, combining them with
their respective weights.
Activation Function: The weighted sum is passed through the Heaviside step function, comparing it
to a threshold to produce a binary output (0 or 1).
Output: The final output is determined by the activation function, often used for binary
classification tasks.
Bias: The bias term helps the perceptron make adjustments independent of the input, improving its
flexibility in learning.
Learning Algorithm: The perceptron adjusts its weights and bias using a learning algorithm, such as
the Perceptron Learning Rule, to minimize prediction errors.
These components enable the perceptron to learn from data and make predictions. While a single perceptron
can handle simple binary classification, complex tasks require multiple perceptrons organized into layers,
forming a neural network.
How does Perceptron work?
A weight is assigned to each input node of a perceptron, indicating the importance of that input in determining
the output. The Perceptron’s output is calculated as a weighted sum of the inputs, which is then passed through
an activation function to decide whether the Perceptron will fire.
The weighted sum is computed as:
z = w1x1 + w2x2 + … + wnxn = XᵀW
The step function compares this weighted sum to a threshold: if the sum is greater than or equal to the threshold,
the output is 1; otherwise, it is 0. The activation function most commonly used in perceptrons is the Heaviside
step function:
h(z) = 0 if z < threshold; 1 if z ≥ threshold
A perceptron consists of a single layer of Threshold Logic Units (TLU), with each TLU fully connected to all
input nodes. During training, the perceptron learning rule adjusts each weight in proportion to the prediction
error: wi(new) = wi(old) + η(t − y)xi and b(new) = b(old) + η(t − y), where η is the learning rate, t is the target
output, and y is the perceptron's output. When the prediction is correct (t = y), the weights are left unchanged;
when it is wrong, each weight moves in the direction that reduces the error.
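A minimal sketch of the perceptron learning rule in plain Python, trained on the linearly separable AND function with 0/1 inputs (an illustrative dataset, not from the original text). Each update applies wi += lr * (t − y) * xi and b += lr * (t − y).

```python
def step(z, threshold=0.0):
    # Heaviside step activation: fires (1) when z reaches the threshold
    return 1 if z >= threshold else 0

def perceptron_train(samples, lr=0.1, epochs=25):
    """Train a single TLU with the perceptron learning rule."""
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, t in samples:
            # Forward pass: weighted sum, then step activation
            y = step(sum(wi * xi for wi, xi in zip(w, x)) + b)
            error = t - y            # zero when the prediction is correct
            for i in range(n):
                w[i] += lr * error * x[i]   # wi(new) = wi(old) + lr*(t-y)*xi
            b += lr * error                 # b(new) = b(old) + lr*(t-y)
    return w, b

# AND with 0/1 inputs: linearly separable, so training converges
and_samples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = perceptron_train(and_samples)
preds = [step(sum(wi * xi for wi, xi in zip(w, x)) + b) for x, _ in and_samples]
print(preds)   # -> [0, 0, 0, 1]
```

Because AND is linearly separable, the perceptron convergence theorem guarantees the rule stops making mistakes after finitely many updates.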
Q4 Explain the concept of adaptive weights in Adaline.
Ans
1. Adaline (Adaptive Linear Neuron):
A network with a single linear unit is called Adaline (Adaptive Linear Neuron). A unit with a linear
activation function is called a linear unit. Adaline has only one output unit, and the output values are
bipolar (+1, -1). The weights between the input units and the output unit are adjustable; they are adapted
during training using the delta rule, which is why they are called adaptive weights.
First, calculate the net input to the Adaline network, then apply the activation function to obtain the output.
Compare this output with the target: if the two are equal, produce the output; otherwise, send the error back
through the network and update the weights according to the error, which is calculated by the delta learning rule:
wi(new) = wi(old) + η(t − y_in)xi
where η is the learning rate, t is the target output, y_in is the net input, and xi is the i-th input. Because the error
uses the net input rather than the thresholded output, the weights keep adapting even on samples whose sign is
already correct, driving the network toward the least-squares fit.
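The delta rule described above can be sketched in plain Python. As in the perceptron example, the bipolar AND dataset is an illustrative choice, not from the original text; the key difference from the perceptron rule is that the error t − y_in is computed from the raw net input, before any thresholding.

```python
def adaline_train(samples, lr=0.1, epochs=50):
    """Adapt weights with the delta (LMS) rule: wi += lr*(t - y_in)*xi."""
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, t in samples:
            # Net input: weighted sum plus bias, no activation yet
            y_in = sum(wi * xi for wi, xi in zip(w, x)) + b
            error = t - y_in          # delta rule uses the linear error
            for i in range(n):
                w[i] += lr * error * x[i]
            b += lr * error
    return w, b

def predict(x, w, b):
    # Bipolar output: apply the sign threshold only at prediction time
    y_in = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if y_in >= 0 else -1

# Bipolar AND: inputs and targets in {-1, +1}
and_samples = [([1, 1], 1), ([1, -1], -1), ([-1, 1], -1), ([-1, -1], -1)]
w, b = adaline_train(and_samples)
print([predict(x, w, b) for x, _ in and_samples])   # -> [1, -1, -1, -1]
```

On this dataset the weights settle near (0.5, 0.5) with bias near -0.5, the least-squares solution, which classifies all four bipolar AND patterns correctly.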