Module 3
Tech Semester: VI
1 Supervised Learning
Algorithms learn from labeled data to predict future outputs.
2 Unsupervised Learning
Algorithms discover patterns in unlabeled data.
3 Semi-Supervised Learning
Algorithms use both labeled and unlabeled data, effectively utilizing limited labels.
4 Reinforcement Learning
Algorithms learn through trial and error, optimizing actions based on rewards and penalties.
Supervised Learning: Learning from Labels
Definition
Supervised learning trains models on labeled datasets, allowing them to predict outputs based on input features. It's akin to a teacher guiding a student with examples and answers.
Examples
Image classification, spam detection, and fraud detection are common applications of supervised learning.
Unsupervised Learning: Discovering Hidden Patterns
1 No Labels
Unsupervised learning operates on unlabeled datasets, seeking to identify patterns and structures within the data.
2 Applications
Clustering, anomaly detection, and dimensionality reduction are examples of unsupervised learning tasks.
Semi-Supervised Learning: Bridging the Gap
Combining Strengths
Semi-supervised learning leverages both labeled and unlabeled data, effectively utilizing limited labeled data to improve model performance.
Real-world Benefits
This approach is especially valuable when labeled data is scarce, making it practical for many real-world applications.
Reinforcement Learning: Learning Through Interactions
Agents learn by interacting with an environment, optimizing their actions based on rewards and penalties.
Supervised Learning: Key Algorithms
1 Regression
Predicts continuous values, such as stock prices or house prices.
2 Classification
Categorizes data into discrete classes, such as spam detection or image recognition.
3 Deep Learning
Utilizes artificial neural networks with multiple layers to extract complex patterns from data.
Unsupervised Learning: Key Algorithms
Clustering
Groups similar data points together based on their characteristics.
Dimensionality Reduction
Simplifies data by reducing the number of features while preserving essential information.
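Clustering as described above can be sketched with a toy k-means loop. This is a minimal illustration, not any particular library's implementation; the data points and the first-k-points initialization are made up for the demo.

```python
import numpy as np

def kmeans(points, k, iters=20):
    """Toy k-means: repeatedly assign each point to its nearest
    centroid, then move each centroid to its cluster's mean."""
    centroids = points[:k].copy()  # initialize with the first k points (demo only)
    for _ in range(iters):
        # distance of every point to every centroid, then pick the nearest
        dists = np.linalg.norm(points[:, None] - centroids[None], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = points[labels == j].mean(axis=0)
    return labels, centroids

# two well-separated blobs of points (hypothetical data)
data = np.array([[0.0, 0.1], [5.0, 5.1], [0.2, 0.0],
                 [0.1, 0.2], [5.2, 5.0], [5.1, 5.2]])
labels, centroids = kmeans(data, k=2)
# points near (0, 0) land in one cluster, points near (5, 5) in the other
```

With no labels provided, the grouping emerges purely from the distances between points, which is the defining property of unsupervised learning.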
Neural networks have several use cases across many industries, such as the following:
• Medical diagnosis by medical image classification
• Targeted marketing by social network filtering and behavioral data analysis
• Financial predictions by processing historical data of financial instruments
• Electrical load and energy demand forecasting
• Process and quality control
• Chemical compound identification
Applications of Neural Network (NN)
Computer vision
Computer vision is the ability of computers to extract information and insights from images
and videos. With neural networks, computers can distinguish and recognize images similar to
humans. Computer vision has several applications, such as the following:
• Visual recognition in self-driving cars so they can recognize road signs and other road users
• Content moderation to automatically remove unsafe or inappropriate content from image
and video archives
• Facial recognition to identify faces and recognize attributes like open eyes, glasses, and
facial hair
• Image labeling to identify brand logos, clothing, safety gear, and other image details
Speech recognition
Neural networks can analyze human speech despite varying speech patterns, pitch, tone,
language, and accent. Virtual assistants like Amazon Alexa and automatic transcription
software use speech recognition to do tasks like these:
• Assist call center agents and automatically classify calls
• Convert clinical conversations into documentation in real time
• Accurately subtitle videos and meeting recordings for wider content reach
Recommendation engines
Recommendation engines powered by neural networks have become a crucial part of many
services we use today. They help personalize content and suggest products based on a user's
preferences, behaviors, and past interactions.
Example: Movie Recommendation Engine (Netflix), E-commerce Recommendation Engine
(Amazon), Music Recommendation Engine (Spotify)
Working of Neural Network (NN)
• The human brain is the inspiration behind neural network architecture.
• Human brain cells, called neurons, form a complex, highly interconnected network and send
electrical signals to each other to help humans process information.
• Similarly, an artificial neural network is made of artificial neurons that work together to solve
a problem.
• Artificial neurons are software modules, called nodes, and artificial neural networks are software programs or algorithms that, at their core, use computing systems to perform mathematical calculations.
Simple Neural Network architecture
Input Layer
Information from the outside world enters the artificial neural network from the input layer. Input nodes process
the data, analyze or categorize it, and pass it on to the next layer.
Hidden Layer
Hidden layers take their input from the input layer or other hidden layers. Artificial neural networks can have a
large number of hidden layers. Each hidden layer analyzes the output from the previous layer, processes it
further, and passes it on to the next layer.
Output Layer
The output layer gives the final result of all the data processing by the artificial neural network. It can have single
or multiple nodes. For instance, if we have a binary (yes/no) classification problem, the output layer will have
one output node, which will give the result as 1 or 0. However, if we have a multi-class classification problem,
the output layer might consist of more than one output node.
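The three layers described above can be sketched as a single forward pass in NumPy. The layer sizes, random weights, and input vector here are illustrative placeholders (a real network would learn the weights from data); the sigmoid activation is one common choice.

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

# Illustrative sizes: 3 input nodes, 4 hidden nodes, 1 output node
# (a binary yes/no problem, as described above).
rng = np.random.default_rng(42)
W_hidden = rng.normal(size=(3, 4))   # input layer  -> hidden layer
W_output = rng.normal(size=(4, 1))   # hidden layer -> output layer

x = np.array([0.5, -1.0, 2.0])       # information entering the input layer

h = sigmoid(x @ W_hidden)            # hidden layer processes and passes on
y = sigmoid(h @ W_output)            # output layer gives the final result
# y is a single value in (0, 1): one output node for the binary problem
```

Adding more hidden layers means repeating the "multiply by weights, apply activation, pass on" step once per layer.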
How do our brains work?
A processing element
Dendrites: Input
Cell body: Processor
Synapse: Link
Axon: Output
The axon endings almost touch the dendrites or cell body of the
next neuron.
Neurotransmitters are chemicals which are released from the first neuron and which bind to the second.
Processing: the cell body sums its inputs,
y = x1 + x2 + … + xm
Output: y
How do ANNs work?
Not all inputs are equal
Inputs: x1, x2, …, xm
Weights: w1, w2, …, wm
Output: y
Each input is multiplied by its own weight before the neuron combines them, so some inputs count more than others.
How do ANNs work?
The signal is not passed down to the next neuron verbatim
Inputs: x1, x2, …, xm
Weights: w1, w2, …, wm
Processing: ∑ (the weighted sum vk)
Transfer function (activation function): f(vk)
Output: y
The output is a function of the input, affected by the weights and the transfer function.
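The diagram above (inputs x1…xm, weights w1…wm, summation ∑, transfer function f) can be sketched as a single artificial neuron. The input values, weights, and the simple threshold activation are illustrative choices, not prescribed by the slides.

```python
import numpy as np

def neuron(x, w, f):
    """One artificial neuron: weighted sum of the inputs passed
    through a transfer (activation) function, y = f(sum_i w_i * x_i)."""
    v = np.dot(w, x)   # processing step: vk = w1*x1 + ... + wm*xm
    return f(v)        # transfer function: y = f(vk)

def step(v):
    """A simple threshold activation."""
    return 1 if v >= 0 else 0

y = neuron(x=np.array([1.0, 0.5]), w=np.array([0.4, -0.2]), f=step)
# v = 0.4*1.0 + (-0.2)*0.5 = 0.3 >= 0, so y = 1
```

Changing the weights or the transfer function changes the output for the same inputs, which is exactly the point of the slide: the signal is transformed, not passed on verbatim.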
McCulloch-Pitts (M-P) Neuron Model
• It is a binary neuron (activation function is binary), meaning that it can only have two states: on or off.
McCulloch-Pitts Model of Neuron: AND function
Truth table:
x1 x2 | y
0  0  | 0
0  1  | 0
1  0  | 0
1  1  | 1
Assuming w1 = 1 and w2 = 1
Threshold calculation: the weighted sum x1 + x2 reaches 2 only for input (1, 1), so a threshold of 2 makes the neuron fire exactly when the AND function is 1.
McCulloch-Pitts Model of Neuron: OR function
Truth table:
A B | y
0 0 | 0
0 1 | 1
1 0 | 1
1 1 | 1
With w1 = 1, w2 = 1 and the threshold set to 0.5:
o For A = 0, B = 0, the sum is 0, which is less than 0.5, so the output is 0 (correct).
o For A = 0, B = 1, the sum is 1, which is greater than or equal to 0.5, so the output is 1 (correct).
o For A = 1, B = 0, the sum is 1, which is greater than or equal to 0.5, so the output is 1 (correct).
o For A = 1, B = 1, the sum is 2, which is greater than or equal to 0.5, so the output is 1 (correct).
With w1 = 1 and w2 = 1 it is not possible to make the neuron fire only for input (1, 0), so those weights are not suitable; consider w1 = 1 and w2 = -1 instead.
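The M-P neurons worked through above can be written as a few lines of Python. The function below is a direct sketch of the model: binary inputs, fixed weights, and a fixed threshold (no learning).

```python
def mp_neuron(inputs, weights, threshold):
    """McCulloch-Pitts neuron: outputs 1 (fires) when the weighted
    sum of its binary inputs reaches the threshold, else 0."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

pairs = [(0, 0), (0, 1), (1, 0), (1, 1)]

# AND: w1 = w2 = 1, threshold 2 -> fires only for (1, 1)
AND = [mp_neuron(p, (1, 1), 2) for p in pairs]       # [0, 0, 0, 1]

# OR: w1 = w2 = 1, threshold 0.5 -> fires for any input containing a 1
OR = [mp_neuron(p, (1, 1), 0.5) for p in pairs]      # [0, 1, 1, 1]

# Fire only for (1, 0): w1 = 1, w2 = -1, threshold 1
ANDNOT = [mp_neuron(p, (1, -1), 1) for p in pairs]   # [0, 0, 1, 0]
```

The last line shows why w2 = -1 is needed: with both weights equal to 1, no threshold can separate (1, 0) from (1, 1), since the latter always has the larger sum.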
Activation functions
• Crucial components in artificial neural networks because they determine the output of a neuron given an input or a weighted sum of inputs.
• Different activation functions are used depending on the task, such as classification or regression, and the desired properties of the network, such as non-linearity, differentiability, and output range.
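Three of the most common activation functions can be sketched as follows; the output-range comments illustrate the "desired properties" mentioned above. The sample input values are arbitrary.

```python
import numpy as np

def sigmoid(v):            # smooth and differentiable, output in (0, 1)
    return 1.0 / (1.0 + np.exp(-v))

def tanh(v):               # zero-centred, output in (-1, 1)
    return np.tanh(v)

def relu(v):               # non-linear but cheap, output in [0, inf)
    return np.maximum(0.0, v)

v = np.array([-2.0, 0.0, 2.0])
print(sigmoid(v))  # sigmoid(0) = 0.5
print(tanh(v))     # tanh(0) = 0.0
print(relu(v))     # negative inputs are clipped to 0
```

A classifier producing probabilities would typically end in a sigmoid, while hidden layers of deep networks commonly use ReLU because its gradient does not saturate for positive inputs.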
McCulloch-Pitts Neuron
• The weights and thresholds are fixed and do not adjust based on input or output (no learning).
• Inputs and outputs are binary (0 or 1).
• Could model simple logical functions (AND, OR, etc.).
• Provided a theoretical foundation for understanding how neurons could perform computations, but it was not practical for solving real-world problems.
Perceptron
• Could learn to classify input data into categories by adjusting its weights based on feedback (learning rule).
• The inputs could be continuous values, not just binary. The output, however, was still binary (0 or 1).
• Designed for pattern recognition tasks.
• Introduced a practical, trainable model that could be applied to simple classification tasks.
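The trainable model contrasted with the M-P neuron above can be sketched with the classic perceptron learning rule. The training data (the AND function), learning rate, and epoch count here are illustrative choices.

```python
import numpy as np

def train_perceptron(X, y, epochs=10, lr=1.0):
    """Perceptron learning rule: unlike the M-P neuron, the weights
    and bias are adjusted from feedback on each labelled example."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if xi @ w + b >= 0 else 0   # output is still binary
            err = target - pred
            w += lr * err * xi                    # adjust weights on error
            b += lr * err
    return w, b

# Learn AND from labelled examples (continuous inputs would also work)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1])
w, b = train_perceptron(X, y)
preds = [1 if xi @ w + b >= 0 else 0 for xi in X]
# after training, preds reproduces the AND truth table: [0, 0, 0, 1]
```

Where the M-P neuron needed its weights and threshold chosen by hand, the perceptron discovers them from the labelled data, which is what made it a practical model for simple classification tasks.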
Neural Network architectures