Module 3

The document provides an overview of Artificial Intelligence (AI), Machine Learning (ML), and Neural Networks (NN), detailing their definitions, components, and applications. It explains the different types of machine learning, including supervised, unsupervised, semi-supervised, and reinforcement learning, along with their respective use cases. Additionally, it discusses the architecture and functioning of neural networks, including the McCulloch-Pitts model, and highlights their importance in various fields such as computer vision, speech recognition, and natural language processing.

Class: TY B. Tech
Semester: VI
Course Name: Artificial Intelligence and Neural Network (CMCR0604)
Lab Name: Artificial Intelligence and Neural Network Lab (CMLR0604)
Artificial Intelligence (AI)
• Artificial Intelligence is a broad field in computer science focused on creating
systems that can perform tasks that normally require human intelligence. These
tasks include problem-solving, learning, reasoning, perception, language
understanding, and decision-making.
• AI Components:
  • Knowledge Representation: How information about the world is represented.
  • Reasoning: How AI systems draw conclusions or make decisions.
  • Learning: How AI systems improve from experience (which is where Machine Learning fits in).



Machine Learning (ML)
• Machine Learning is a subset of AI that allows systems to learn from data and
improve performance over time without being explicitly programmed. Instead of
relying on pre-defined rules or instructions, machine learning models use data
patterns and statistical techniques to make decisions and predictions.
• The goal of ML is to enable machines to learn from experience (data) and
generalize this knowledge to make accurate predictions on new, unseen data.
• ML is a major technique used to build AI systems. While traditional AI systems
may rely heavily on logic and human expertise, machine learning allows AI
systems to evolve and adapt through data, making them more flexible and powerful.

Types of Machine Learning
1. Supervised Learning: Algorithms learn from labeled data to predict future outcomes.
2. Unsupervised Learning: Algorithms discover patterns and structures in unlabeled data.
3. Semi-Supervised Learning: Algorithms learn from labeled and unlabeled data, effectively utilizing limited labeled data to predict outcomes.
4. Reinforcement Learning: Algorithms learn through trial and error, optimizing actions based on rewards and penalties.
Supervised Learning: Learning from Labels
Definition: Supervised learning trains models on labeled datasets, allowing them to predict outputs based on input features. It is akin to a teacher guiding a student with examples and answers.
Examples: Image classification, spam detection, and fraud detection are common applications of supervised learning.
Unsupervised Learning: Discovering Hidden Patterns
1. No Labels: Unsupervised learning operates on unlabeled datasets, seeking to identify patterns and structures within the data.
2. Applications: Clustering, anomaly detection, and dimensionality reduction are examples of unsupervised learning tasks.
Semi-Supervised Learning: Bridging the Gap
Combining Strengths: Semi-supervised learning leverages both labeled and unlabeled data, effectively utilizing limited labeled data to improve model performance.
Real-world Benefits: This approach is especially valuable when labeled data is scarce, making it practical for many real-world applications.
Reinforcement Learning: Learning Through Interactions
Trial and Error: Reinforcement learning trains agents to learn through trial and error, making decisions based on maximizing rewards and minimizing penalties.
Applications: This paradigm is ideal for tasks involving control, robotics, game playing, and optimization problems.
Supervised Learning: A Closer Look

1 Regression
Predicts continuous values, such as stock prices or
house prices.

2 Classification
Categorizes data into discrete classes, such as spam
detection or image recognition.

3 Deep Learning
Utilizes artificial neural networks with multiple layers
to extract complex patterns from data.
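To make the regression / classification distinction above concrete, here is a minimal sketch on hypothetical toy data (NumPy only; the numbers and the simple threshold rule are illustrative, not taken from the slides):

```python
import numpy as np

# --- Regression: predict a continuous value (e.g., a house price) ---
# Toy data: house size in square metres vs. price (hypothetical numbers).
sizes = np.array([50.0, 80.0, 120.0, 160.0])
prices = np.array([100.0, 155.0, 240.0, 310.0])

# Least-squares fit of price = a * size + b.
A = np.vstack([sizes, np.ones_like(sizes)]).T
a, b = np.linalg.lstsq(A, prices, rcond=None)[0]
print("predicted price for 100 m^2:", a * 100 + b)

# --- Classification: predict a discrete class (e.g., spam vs. not spam) ---
# Toy feature: number of suspicious words in an email (hypothetical).
suspicious_words = np.array([0, 1, 7, 9])
labels = np.array([0, 0, 1, 1])          # 0 = not spam, 1 = spam

# A trivial learned rule: predict "spam" above the midpoint of the class means.
threshold = (suspicious_words[labels == 0].mean()
             + suspicious_words[labels == 1].mean()) / 2
print("email with 6 suspicious words is spam?", 6 > threshold)
```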
Unsupervised Learning: Key Algorithms
Clustering
Groups similar data points together based on their characteristics.
Dimensionality Reduction
Simplifies data by reducing the number of features while preserving essential information.
Association Rule Mining
Discovers relationships and dependencies between data elements.
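As an illustration of clustering, the first algorithm listed above, here is a minimal k-means sketch on hypothetical 2-D points (NumPy only); it is a generic illustration, not a method specified in the slides:

```python
import numpy as np

def kmeans(points, k, iters=10, seed=0):
    """Very small k-means: group points around k centroids."""
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest centroid.
        dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each centroid to the mean of its assigned points.
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = points[labels == j].mean(axis=0)
    return labels, centroids

# Two obvious groups of hypothetical 2-D data points.
pts = np.array([[1.0, 1.0], [1.2, 0.8], [0.9, 1.1],
                [8.0, 8.0], [8.2, 7.9], [7.8, 8.1]])
labels, centroids = kmeans(pts, k=2)
print(labels)      # e.g. [0 0 0 1 1 1]
print(centroids)   # approximate cluster centres
```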
Introduction to Neural Network (NN)
• A neural network is a method in artificial intelligence (AI) that teaches computers to process data in a way that is inspired by the human brain.
• It is a type of machine learning (ML) process, called deep learning, that uses interconnected nodes or neurons in a layered structure that resembles the human brain.
• It creates an adaptive system that computers use to learn from their mistakes and improve continuously.
• Thus, artificial neural networks attempt to solve complicated problems, like summarizing documents or recognizing faces, with greater accuracy.
AI, ML and NN connection
• AI is the overarching goal of creating intelligent systems, capable of performing tasks that typically require human-level cognitive abilities. AI can be achieved through various approaches, one of which is Machine Learning.
• Machine Learning is a method of achieving AI by enabling systems to learn from data and improve over time without explicit programming. ML is often the method of choice for building intelligent systems, as it allows machines to adapt to new data and experiences.
• Neural Networks are a powerful technique within ML, often used when the complexity of the task requires modeling complex relationships, patterns, or unstructured data. Neural networks enable deep learning, which has become the foundation of many advanced AI applications, including self-driving cars, facial recognition, language translation, etc.
Scenario: Self-Driving Cars (AI Application)
• Artificial Intelligence (AI): The goal is to develop a self-driving car that can
navigate and make decisions like a human driver, perceiving the environment and
taking actions.
• Machine Learning (ML): To achieve this, the self-driving car uses machine
learning to learn from vast amounts of driving data (sensor data, images, and
videos). The system uses this data to train models that allow the car to identify
objects like pedestrians, other cars, and traffic signs.
• Neural Networks (NN): Neural networks, specifically Convolutional Neural
Networks (CNNs), are used for image recognition tasks. The neural network
processes data from cameras and LiDAR sensors to identify objects in the car's
environment, like detecting pedestrians on the road. Additionally, Recurrent
Neural Networks (RNNs) may be used for handling sequential data, such as
predicting future actions based on the car's current trajectory and past movements.
Importance of Neural Network (NN)
• Neural networks can help computers make intelligent decisions with limited human assistance.
• This is because they can learn and model the relationships between input and output data that are nonlinear and complex.
• NN can make generalizations and inferences.
• Neural networks can comprehend unstructured data and make general observations without explicit training.
• For instance, they can recognize that two different input sentences have a similar meaning:
  • Can you tell me how to make the payment?
  • How do I transfer money?
• A neural network would know that both sentences mean the same thing.
Use cases of Neural Network (NN)

Neural networks have several use cases across many industries, such as the following:
• Medical diagnosis by medical image classification
• Targeted marketing by social network filtering and behavioral data analysis
• Financial predictions by processing historical data of financial instruments
• Electrical load and energy demand forecasting
• Process and quality control
• Chemical compound identification
Applications of Neural Network (NN)

Computer vision
Computer vision is the ability of computers to extract information and insights from images
and videos. With neural networks, computers can distinguish and recognize images similar to
humans. Computer vision has several applications, such as the following:
• Visual recognition in self-driving cars so they can recognize road signs and other road users
• Content moderation to automatically remove unsafe or inappropriate content from image
and video archives
• Facial recognition to identify faces and recognize attributes like open eyes, glasses, and
facial hair
• Image labeling to identify brand logos, clothing, safety gear, and other image details
Applications of Neural Network (NN)

Speech recognition
Neural networks can analyze human speech despite varying speech patterns, pitch, tone,
language, and accent. Virtual assistants like Amazon Alexa and automatic transcription
software use speech recognition to do tasks like these:
• Assist call center agents and automatically classify calls
• Convert clinical conversations into documentation in real time
• Accurately subtitle videos and meeting recordings for wider content reach
Applications of Neural Network (NN)

Natural language processing


Natural language processing (NLP) is the ability to process natural, human-created text.
Neural networks help computers gather insights and meaning from text data and
documents. NLP has several use cases, including in these functions:
• Automated virtual agents and chatbots
• Automatic organization and classification of written data
• Business intelligence analysis of long-form documents like emails and forms
• Indexing of key phrases that indicate sentiment, like positive and negative comments on
social media
• Document summarization and article generation for a given topic
Applications of Neural Network (NN)

Recommendation engines
Recommendation engines powered by neural networks have become a crucial part of many
services we use today. They help personalize content and suggest products based on a user's
preferences, behaviors, and past interactions.
Example: Movie Recommendation Engine (Netflix), E-commerce Recommendation Engine
(Amazon), Music Recommendation Engine (Spotify)
Working of Neural Network (NN)
• The human brain is the inspiration behind neural network architecture.
• Human brain cells, called neurons, form a complex, highly interconnected network and send
electrical signals to each other to help humans process information.
• Similarly, an artificial neural network is made of artificial neurons that work together to solve
a problem.
• Artificial neurons are software modules, called nodes, and artificial neural networks are software programs or algorithms that, at their core, use computing systems to perform mathematical calculations.
Simple Neural Network architecture
Input Layer
Information from the outside world enters the artificial neural network from the input layer. Input nodes process
the data, analyze or categorize it, and pass it on to the next layer.
Hidden Layer
Hidden layers take their input from the input layer or other hidden layers. Artificial neural networks can have a
large number of hidden layers. Each hidden layer analyzes the output from the previous layer, processes it
further, and passes it on to the next layer.
Output Layer
The output layer gives the final result of all the data processing by the artificial neural network. It can have single
or multiple nodes. For instance, if we have a binary (yes/no) classification problem, the output layer will have
one output node, which will give the result as 1 or 0. However, if we have a multi-class classification problem,
the output layer might consist of more than one output node.
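A minimal sketch of this layered structure, assuming a hypothetical 3-input, 4-hidden-node, 1-output network with randomly chosen weights (the sizes and the sigmoid activation are illustrative, not values from the slides):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical, randomly initialized weights for a tiny network:
# 3 input nodes -> 4 hidden nodes -> 1 output node (binary yes/no).
rng = np.random.default_rng(42)
W_hidden = rng.normal(size=(3, 4))   # input layer  -> hidden layer
b_hidden = np.zeros(4)
W_out = rng.normal(size=(4, 1))      # hidden layer -> output layer
b_out = np.zeros(1)

def forward(x):
    """One pass of data through input -> hidden -> output layers."""
    h = sigmoid(x @ W_hidden + b_hidden)   # hidden layer activations
    y = sigmoid(h @ W_out + b_out)         # single output node in (0, 1)
    return y

x = np.array([0.5, -1.0, 2.0])             # one input example
print(forward(x))                           # value near 0 or 1 -> class 0 / 1
```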
How do our brains work?
• A biological neuron is a processing element:
  • Dendrites: input
  • Cell body: processor
  • Synapse: link
  • Axon: output
• A neuron is connected to other neurons through about 10,000 synapses.
• A neuron receives input from other neurons; the inputs are combined.
• Once the combined input exceeds a critical level, the neuron discharges a spike: an electrical pulse that travels from the cell body, down the axon, to the next neuron(s).
• The axon endings almost touch the dendrites or cell body of the next neuron.
• Transmission of an electrical signal from one neuron to the next is effected by neurotransmitters, chemicals which are released from the first neuron and which bind to the second.
• This link is called a synapse. The strength of the signal that reaches the next neuron depends on factors such as the amount of neurotransmitter available.
How do ANNs work?
• An artificial neuron is an imitation of a human neuron.
• Now, let us have a look at the model of an artificial neuron.
• Inputs x1, x2, ..., xm enter the neuron and are combined by a summation unit:
  y = x1 + x2 + ... + xm
• Not all inputs are equal: each input has an associated weight, so the processing step computes a weighted sum:
  y = x1·w1 + x2·w2 + ... + xm·wm
• The signal is not passed down to the next neuron verbatim: the weighted sum vk is passed through a transfer function (activation function) f, and the neuron outputs y = f(vk).
• The output is therefore a function of the inputs, affected by the weights and the transfer function.
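A minimal sketch of the three stages just described (plain sum, weighted sum, transfer function), with hypothetical inputs and weights and tanh chosen arbitrarily as the transfer function f:

```python
import numpy as np

x = np.array([1.0, 0.5, -0.25])   # inputs x1..x3 (hypothetical values)
w = np.array([0.4, 0.8, 1.2])     # weights w1..w3 (hypothetical values)

plain_sum = x.sum()               # y = x1 + x2 + ... + xm   (all inputs treated equally)
weighted_sum = np.dot(x, w)       # v = x1*w1 + x2*w2 + ... + xm*wm
output = np.tanh(weighted_sum)    # y = f(v): tanh used here as an example of f

print(plain_sum, weighted_sum, output)
```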
McCulloch-Pitts (M-P) Neuron Model

• The McCulloch-Pitts neuron is the first and simplest computational model of a biological neuron, proposed by Warren McCulloch and Walter Pitts in 1943.
• It is a binary neuron (its activation function is binary), meaning that it can only have two states: on or off.
• It can be divided into two parts:
  • Aggregation: The neuron aggregates multiple boolean inputs (0 or 1).
  • Threshold Decision: Based on the aggregated value, the neuron makes a decision using a threshold function.
• Weights associated with the links can be excitatory (positive) or inhibitory (negative).
• It is mostly used in logic functions.
McCulloch-Pitts Model of Neuron

The McCulloch-Pitts neuron has three components:


• Inputs: The inputs are the signals that the neuron
receives from other neurons.
• Threshold: The threshold is the value that the
weighted sum of the inputs must exceed in order for
the neuron to fire.
• Output: The output is the signal that the neuron sends
to other neurons.
McCulloch-Pitts Model of Neuron

The McCulloch-Pitts neuron works as follows:

1. The inputs are multiplied by their corresponding weights.
2. The weighted inputs are summed.
3. If the summed value is greater than or equal to the threshold, the neuron fires and outputs a 1. Otherwise, the neuron does not fire and outputs a 0.
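These three steps can be written directly as a short function; the inputs, weights, and threshold below are illustrative:

```python
def mp_neuron(inputs, weights, threshold):
    """McCulloch-Pitts neuron: weighted sum compared against a threshold."""
    net = sum(x * w for x, w in zip(inputs, weights))   # steps 1 and 2
    return 1 if net >= threshold else 0                 # step 3: fire or not

# Illustrative call: two binary inputs, both excitatory (weight +1), threshold 2.
print(mp_neuron([1, 1], [1, 1], threshold=2))   # -> 1 (fires)
print(mp_neuron([1, 0], [1, 1], threshold=2))   # -> 0 (does not fire)
```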
McCulloch-Pitts Model of Neuron

Additional points about the McCulloch-Pitts neuron:

• The McCulloch-Pitts neuron is a binary neuron, which means that it cannot represent real-valued data.
• The McCulloch-Pitts neuron is not a very accurate model of biological neurons. Biological neurons have a variety of features that are not captured by the McCulloch-Pitts neuron, such as the ability to integrate inputs over time.
• The McCulloch-Pitts neuron is not very powerful and cannot be used to solve complex problems. More complex neural network models are needed to solve complex problems.
McCulloch-Pitts Model of Neuron

Overall, the McCulloch-Pitts neuron is a simple and easy-to-understand model of a biological neuron. It has been used to simulate the behavior of neural networks and to develop artificial neural networks. However, it is not a very accurate model of biological neurons and is not very powerful.
McCulloch-Pitts Model of Neuron: AND function

Truth table (AND):
x1  x2 | y
0   0  | 0
0   1  | 0
1   0  | 0
1   1  | 1

There is no particular training algorithm, only analysis. Assuming w1 = 1 and w2 = 1, the net input x1·w1 + x2·w2 takes the values 0, 1, 1 and 2 for the four input pairs.

Threshold calculation: the neuron should fire only for input (1, 1), whose net input is 2, so the threshold is set to 2 (output 1 when the net input is greater than or equal to 2, otherwise 0).
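A minimal check of this analysis in code, assuming w1 = w2 = 1 and a threshold of 2 as above:

```python
def mp_neuron(inputs, weights, threshold):
    net = sum(x * w for x, w in zip(inputs, weights))
    return 1 if net >= threshold else 0

# AND: fires only when both inputs are 1 (w1 = w2 = 1, threshold = 2).
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", mp_neuron([a, b], [1, 1], threshold=2))
```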
McCulloch-Pitts Model of Neuron: OR function

Truth table (OR):
x1  x2 | y
0   0  | 0
0   1  | 1
1   0  | 1
1   1  | 1

Assuming w1 = 1 and w2 = 1, set the threshold to 0.5:
• For A = 0, B = 0, the sum is 0, which is less than 0.5, so the output is 0 (correct).
• For A = 0, B = 1, the sum is 1, which is greater than or equal to 0.5, so the output is 1 (correct).
• For A = 1, B = 0, the sum is 1, which is greater than or equal to 0.5, so the output is 1 (correct).
• For A = 1, B = 1, the sum is 2, which is greater than or equal to 0.5, so the output is 1 (correct).
McCulloch-Pitts Model of Neuron: ANDNOT function

With w1 = 1 and w2 = 1 it is not possible to fire the neuron for input (1, 0) only, so these weights are not suitable. Consider w1 = 1 and w2 = -1 instead (one excitatory and one inhibitory weight).
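The OR analysis (threshold 0.5) and the ANDNOT weights can be checked the same way; the ANDNOT threshold of 1 is an assumption here, since the slides do not state it:

```python
def mp_neuron(inputs, weights, threshold):
    net = sum(x * w for x, w in zip(inputs, weights))
    return 1 if net >= threshold else 0

for a in (0, 1):
    for b in (0, 1):
        or_out = mp_neuron([a, b], [1, 1], threshold=0.5)      # OR
        andnot_out = mp_neuron([a, b], [1, -1], threshold=1)   # A AND NOT B
        print(a, b, "OR:", or_out, "ANDNOT:", andnot_out)
```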
Activation functions
• Crucial components in artificial neural networks because they determine the output of a neuron given an input or a weighted sum
of inputs.
• Different activation functions are used depending on the task, such as classification or regression, and the desired properties of the
network, such as non-linearity, differentiability, and output range.
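A few commonly used activation functions, sketched with NumPy; this particular selection is illustrative and may differ from the functions plotted on the following slides:

```python
import numpy as np

def step(z, threshold=0.0):
    """Binary / threshold activation (as in the McCulloch-Pitts neuron)."""
    return np.where(z >= threshold, 1, 0)

def sigmoid(z):
    """Smooth, differentiable, output in (0, 1) -- common for classification."""
    return 1.0 / (1.0 + np.exp(-z))

def tanh(z):
    """Like sigmoid but output in (-1, 1), centred at zero."""
    return np.tanh(z)

def relu(z):
    """Rectified linear unit: non-linear, cheap, widely used in deep networks."""
    return np.maximum(0.0, z)

z = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
for f in (step, sigmoid, tanh, relu):
    print(f.__name__, f(z))
```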
Activation functions: graphs and worked examples (calculate the activation for the given inputs)

Practice

MP model for X-OR gate
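The XOR figures themselves are not reproduced above. A single M-P neuron cannot realize XOR, and one standard construction, assumed here rather than taken from the slides, combines two ANDNOT neurons with an OR neuron: XOR(A, B) = (A ANDNOT B) OR (B ANDNOT A). A minimal sketch:

```python
def mp_neuron(inputs, weights, threshold):
    net = sum(x * w for x, w in zip(inputs, weights))
    return 1 if net >= threshold else 0

def xor(a, b):
    z1 = mp_neuron([a, b], [1, -1], threshold=1)     # A AND NOT B
    z2 = mp_neuron([a, b], [-1, 1], threshold=1)     # B AND NOT A
    return mp_neuron([z1, z2], [1, 1], threshold=1)  # z1 OR z2

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor(a, b))
```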
Perceptron

MP model vs. Perceptron

MP model:
• The weights (the same for all inputs) and thresholds are fixed and do not adjust based on input or output (no learning).
• Inputs and outputs are binary (0 or 1).
• Could model simple logical functions (AND, OR, etc.).
• Provided a theoretical foundation for understanding how neurons could perform computations, but it was not practical for solving real-world problems.

Perceptron:
• Could learn to classify input data into categories by adjusting its weights based on feedback (a learning rule).
• The inputs could be continuous values, not just binary. The output, however, was still binary (0 or 1).
• Designed for pattern recognition tasks.
• Introduced a practical, trainable model that could be applied to simple classification tasks.
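To contrast with the fixed-weight M-P model, here is a minimal sketch of the classic perceptron learning rule; the learning rate, epoch count, and OR-like toy data are illustrative choices, not values from the slides:

```python
import numpy as np

def train_perceptron(X, y, lr=0.1, epochs=20):
    """Classic perceptron rule: w <- w + lr * (target - prediction) * x."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if np.dot(xi, w) + b >= 0 else 0
            update = lr * (target - pred)
            w += update * xi
            b += update
    return w, b

# Continuous-valued inputs, binary targets (here: learn a simple OR-like rule).
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
y = np.array([0, 1, 1, 1])
w, b = train_perceptron(X, y)
print(w, b)
print([1 if np.dot(xi, w) + b >= 0 else 0 for xi in X])   # expect [0, 1, 1, 1]
```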
Neural Network architectures
