Unit-1 MLT


Subject Coordinator : Rashika Bangroo

Contents
 Syllabus
 Text Books
 Course Outcomes
 Unit-1
Syllabus
Text Books
1) Tom M. Mitchell, Machine Learning, McGraw-Hill Education (India) Private Limited, 2013.
2) Ethem Alpaydin, Introduction to Machine Learning (Adaptive Computation and Machine Learning), The MIT Press, 2004.
3) Stephen Marsland, Machine Learning: An Algorithmic Perspective, CRC Press, 2009.
4) Christopher Bishop, Pattern Recognition and Machine Learning, Springer-Verlag, Berlin.
Course Outcomes
 “Artificial Intelligence, deep learning,
machine learning — whatever you’re doing if
you don’t understand it — learn it. Because
otherwise, you’re going to be a dinosaur
within 3 years.”
- Mark Cuban, American entrepreneur
What is Machine Learning?
 Artificial Intelligence is the concept of creating intelligent machines.
 Machine Learning is a subset of artificial intelligence
that helps you build AI-driven applications.
 Deep Learning is a subset of machine learning that
uses vast volumes of data and complex algorithms to
train a model.
 Let’s have a look at an example of an AI-driven product
- Amazon Echo :
How Does Machine Learning Work?
 Machine learning accesses vast amounts of data (both structured and unstructured) and learns from it to make predictions about future outcomes. It learns from the data by using multiple algorithms and techniques.
ML Applications & Examples
1) Social Media Features : E.g : Facebook notices and records your activities, chats, likes, comments, and the time you spend on specific kinds of posts. ML learns from this activity and makes friend and page suggestions for your profile.
2) Product Recommendations : Using ML & AI, websites track your behavior based on your previous purchases, searching patterns, and cart history, and then make product recommendations.
3) Image Recognition : Image recognition is an approach
for cataloging and detecting a feature or an object in the
digital image.
4) Sentiment Analysis : It is a real-time ML application
that determines the emotion or opinion of the speaker
or the writer. For example, if someone has written a review
or email (or any form of a document), a sentiment
analyzer will instantly find out the actual thought and
tone of the text.
5) Language Translation : One of the most common
ML applications is language translation. It plays a
significant role in the translation of one language to
another. E.g: websites can translate from one language
to another effortlessly.
6) Medical diagnosis :
Real-world examples for medical diagnosis:
 Assisting in formulating a diagnosis or recommending a treatment option
 Oncology & pathology use ML to recognise cancerous
tissue
Types of ML
1) Supervised Learning
 In the supervised learning technique, we train the machine using a "labelled" dataset, and based on this training, the machine predicts the output.
 Labelled data means that some of the inputs are already mapped to their corresponding outputs.
 First, we train the machine with the input &
corresponding output, and then we ask the machine to
predict the output using the test dataset.
Supervised Learning Techniques
Supervised machine learning can be classified into two
types of problems, which are given below:
1) Classification : A classification problem is when the output variable is a category, such as "red" or "blue", or "disease" or "no disease" (a short code sketch follows the list below).
Some popular classification algorithms are given below:
 Random Forest Algorithm
 Decision Tree Algorithm (ID3)
 Naïve Bayes Algorithm
 Support Vector Machine Algorithm
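As a rough illustration only (not from the slides), the following minimal Python sketch shows a supervised classification workflow; it assumes scikit-learn is installed and uses its built-in iris dataset and a decision tree classifier as one possible choice:

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Labelled dataset: feature vectors X and their known category labels y.
X, y = load_iris(return_X_y=True)

# Train on part of the labelled data, keep the rest as an unseen test set.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

model = DecisionTreeClassifier(random_state=0)
model.fit(X_train, y_train)            # learn from the labelled training examples
print(model.predict(X_test[:5]))       # predicted categories for unseen inputs
print(model.score(X_test, y_test))     # fraction of test inputs classified correctly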
Supervised Learning
2) Regression : A regression problem is when the output variable is a real value, such as "dollars" or "weight" (a short code sketch follows the list below).
Techniques of Regression :
 Simple Linear Regression Algorithm
 Multivariate Regression Algorithm
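A minimal simple linear regression sketch in the same spirit; the hours-studied and exam-score numbers below are made up for illustration, and scikit-learn and NumPy are assumed to be available:

import numpy as np
from sklearn.linear_model import LinearRegression

# Made-up data: hours studied (input) vs. exam score (real-valued output).
X = np.array([[1], [2], [3], [4], [5]])
y = np.array([52.0, 55.0, 61.0, 64.0, 70.0])

model = LinearRegression()
model.fit(X, y)                  # fit the line y = w*x + b to the data
print(model.predict([[6]]))      # predict a real value for an unseen input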
Supervised Learning Example
 Suppose you are given a basket filled with different
kinds of fruits.
Supervised Learning Example
 Now the first step is to train the machine with all the
different fruits one by one like this:
 If the shape of the object is rounded and has a
depression at the top, is red in color, then it will be
labeled as –Apple.
 If the shape of the object is a long curving cylinder
having Green-Yellow color, then it will be labeled as –
Banana.
 Now suppose that, after training, the machine is given a new fruit, say a banana, from the basket and is asked to identify it.
Supervised Learning Example
 Since the machine has already learned from the previous data, it now has to use that knowledge. It will first classify the fruit by its shape and color, confirm the fruit name as BANANA, and put it in the banana category. Thus the machine learns from the training data (the basket of fruits) and then applies that knowledge to the test data (the new fruit).
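The fruit example can be sketched in code as follows; the numeric encodings of shape and color and the use of a nearest-neighbour classifier are hypothetical choices made only for illustration (scikit-learn is assumed to be installed):

from sklearn.neighbors import KNeighborsClassifier

# Hypothetical encoding of each fruit: [roundness 0-1, length in cm, redness 0-1]
X_train = [
    [0.9, 7.0, 0.9],    # apple: rounded, short, red
    [0.9, 7.5, 0.8],    # apple
    [0.2, 18.0, 0.1],   # banana: long curving cylinder, green-yellow
    [0.2, 20.0, 0.1],   # banana
]
y_train = ["Apple", "Apple", "Banana", "Banana"]

model = KNeighborsClassifier(n_neighbors=1)
model.fit(X_train, y_train)                 # training data: the basket of labelled fruits

new_fruit = [[0.3, 19.0, 0.15]]             # test data: a new, unseen fruit
print(model.predict(new_fruit))             # expected output: ['Banana']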
2) Unsupervised Learning
 The main aim of the unsupervised learning algorithm is to group or categorise the unsorted dataset according to similarities, patterns, and differences.
 Unsupervised Learning can be further classified into two types,
which are given below:
1) Clustering : Some of the popular clustering algorithms are
given below:
 K-Means Clustering algorithm
 Mean-shift algorithm
 DBSCAN Algorithm
 Principal Component Analysis

2) Association Rule Mining
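A minimal clustering sketch, assuming scikit-learn and NumPy are available; the two-dimensional points below are synthetic values invented for illustration:

import numpy as np
from sklearn.cluster import KMeans

# Unlabelled data: two rough groups of 2-D points, with no categories given.
X = np.array([[1.0, 1.1], [1.2, 0.9], [0.8, 1.0],
              [8.0, 8.2], [7.9, 8.1], [8.3, 7.8]])

# K-Means groups the points into k clusters purely by similarity.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
labels = kmeans.fit_predict(X)
print(labels)                    # e.g. [0 0 0 1 1 1]: cluster indices, not class names
print(kmeans.cluster_centers_)   # the learned cluster centres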


Unsupervised Learning Example
 Suppose the machine is given images containing both dogs and cats which it has never seen before.

Unsupervised Learning Example
 The machine categorizes them according to their similarities, patterns, and differences, i.e., it can easily split the pictures into two parts.
 The first part may contain all the pictures having dogs in them, and
 the second part may contain all the pictures having cats in them.
 Here the machine has not learned anything beforehand, which means there is no training data and there are no labelled examples.
Quiz
 Identify whether the given scenarios use Supervised or Unsupervised Learning?
A) FB Face Recognition
B) Netflix Recommends Movies
C) Analysing Fraud Detection
3) Reinforcement Learning
 Reinforcement Learning is a feedback-based ML technique in which an agent learns to behave in an environment by performing actions and seeing the results of those actions.
 For each good action, the agent gets positive feedback, and for each bad action, the agent gets negative feedback or a penalty.
 The agent learns automatically from this feedback, without any labelled data, unlike supervised learning.
 E.g : Game-playing, Robotics, etc.
Reinforcement Learning
 How a robotic dog learns the movement of its arms is an example of reinforcement learning.
 Let's take an example of a maze environment that the agent
needs to explore.
 Environment: It can be anything such as a room, maze,
football ground, etc.
 Agent: An intelligent agent such as AI robot.
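As a rough sketch of how an agent can learn from rewards and penalties, the following tabular Q-learning example uses a toy 4-cell corridor as the maze; the reward values, learning parameters, and corridor layout are all invented for illustration and are not part of the slides:

import random

# Toy "maze": a corridor of 4 cells. The agent starts in cell 0; the goal is cell 3.
# Actions: 0 = move left, 1 = move right. Reaching the goal gives +10, every step costs -1.
N_STATES, GOAL = 4, 3
ACTIONS = [0, 1]
alpha, gamma, epsilon = 0.5, 0.9, 0.2      # learning rate, discount factor, exploration rate

Q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q-table: one value per (state, action) pair

def step(state, action):
    """Apply an action in the environment and return (next_state, reward)."""
    nxt = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    reward = 10.0 if nxt == GOAL else -1.0   # positive feedback at the goal, penalty otherwise
    return nxt, reward

for episode in range(200):
    state = 0
    while state != GOAL:
        # Mostly act greedily, but sometimes explore a random action.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[state][a])
        nxt, reward = step(state, action)
        # Q-learning update: move Q(s, a) toward reward + discounted best future value.
        Q[state][action] += alpha * (reward + gamma * max(Q[nxt]) - Q[state][action])
        state = nxt

# After training, the greedy policy should be "move right" (action 1) in every non-goal cell.
print([max(ACTIONS, key=lambda a: Q[s][a]) for s in range(GOAL)])   # expected: [1, 1, 1]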
Well Posed Learning Problem
 A computer program is said to learn from experience E
with respect to some class of tasks T and performance
measure P, if its performance in tasks T, as measured
by P, improves with experience E.
 Any problem can be classified as a well-posed learning problem if it has three traits –
a) Task
b) Performance Measure
c) Experience
Well Posed Learning Problem E.g
 Some examples that clearly illustrate the well-posed learning problem are –
 1) A checkers learning problem :
 Task – Playing checkers game
 Performance Measure – % of games won against
opponent
 Experience – playing practice games against itself
Well Posed Learning Problem E.g
2) Fruit Prediction Problem
 Task – forecasting different fruits for recognition
 Performance Measure – able to predict maximum variety
of fruits
 Experience – training machine with the largest datasets of
fruits images
3) Face Recognition Problem
 Task – predicting different types of faces
 Performance Measure – able to predict maximum types
of faces
 Experience – training machine with maximum amount of
datasets of different face images
Designing a Learning System
 For any learning system, we must know the three elements: T (Task), P (Performance Measure), and E (Training Experience).
 The learning process starts with the task T, the performance measure P, and the training experience E, and the objective is to find an unknown target function. The target function is the exact knowledge to be learned from the training experience, and it is unknown.
 For example, in a case of credit approval,
Experience : Customer application records
Task : To classify whether the given customer application is
eligible for a loan.
So in this case, the training examples can be represented as
(x1,y1)(x2,y2)..(xn,yn) where X represents customer
application details and y represents the status of credit
approval.
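A small sketch of how these (x, y) training examples might be laid out in Python; the customer features and values below are hypothetical and invented purely for illustration:

# Hypothetical customer application records (x) and their credit approval status (y).
# Each x is a feature vector describing one application; each y is the known outcome.
X = [
    [35, 54000, 2],    # made-up features: [age, annual income, years in current job]
    [22, 18000, 0],
    [48, 92000, 10],
]
y = ["approved", "rejected", "approved"]

training_examples = list(zip(X, y))   # [(x1, y1), (x2, y2), ..., (xn, yn)]
print(training_examples[0])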
Designing a Learning System
 Target function to be learned in the credit approval
learning system is a mapping function f:X →y. This
function represents the exact knowledge defining the
relationship between input variable X and output variable
y.
 Next, the learning algorithm tries to guess a "hypothesis" function h(X) that approximates the unknown f(.). A hypothesis is a function that describes the target, and the hypothesis set is the collection of all possible legal hypotheses. This is the set from which the ML algorithm determines the single hypothesis that best describes the target function or the outputs. The goal of the learning process is to find the final hypothesis that best approximates the unknown target function.
Designing a Learning System
 We will look into the checkers learning problem and
apply the above design choices. For a checkers learning
problem, the three elements will be,
 1. Task T: To play checkers
2. Performance measure P: Total % of the game won in
the tournament.
3. Training experience E: A set of games played against
itself
1) Choosing Training Experience
a) Direct or Indirect Feedback
 Direct : Individual checkers board states & correct moves
for each
 Indirect : Moves sequences & final outcome of various
games
 Learner faces problem of ‘Credit Assignment’
b) The degree to which the learner controls the sequence of training examples
c) How well it represents the distribution of examples over which the final system performance will be measured
E.g –Checkers Game
 Task T: playing checkers
 Performance measure P: percent of games won in the
world tournament
 Training experience E: games played against itself
2) Choosing Target Function
 To determine exactly what type of knowledge will be
learned and how this will be used by the performance
program
 E.g : checkers-playing program that can generate the
legal moves from any board state. The program needs
only to learn how to choose the best move from among
these legal moves.
 ChooseMove : B → M, where B is the set of legal board states and M is the set of legal moves
 In practice, the program learns an evaluation target function V : B → ℝ, which assigns a real-number score to every legal board state
Choosing Target Function
 What should be the value of V for any board state?
a) If b is a final board state that is won then V(b)= 100
b) if b is a final board state that is lost, then V(b) = -100
c) if b is a final board state that is drawn, then V(b) = 0
d) if b is not a final state in the game, then V(b) =
V(b’), where b' is the best final board state that can
be achieved starting from b and playing optimally
until the end of the game
3) Choosing Representation for
Target Function
 Allow the program to represent the function using a large table with a distinct entry specifying the value for each distinct board state.
 To represent using a collection of rules that match against
features of the board state, or
 a quadratic polynomial function of predefined board
features, or
 Artificial neural network
Choosing Representation for
Target Function
 E.g : In the checkers problem, for any given board state, the function V will be calculated as a linear combination of the following board features:
X1: the number of black pieces on the board
X2: the number of red pieces on the board
X3: the number of black kings on the board
X4: the number of red kings on the board
X5: the number of black pieces threatened by red (i.e.,
which can be captured on red's next turn)
X6: the number of red pieces threatened by black
Choosing Representation for
Target Function
 Thus, our learning program will represent V(b) as a linear function of the form :
V(b) = W0 + W1·X1 + W2·X2 + W3·X3 + W4·X4 + W5·X5 + W6·X6
where W0 through W6 are numerical coefficients, or weights, to be chosen by the learning algorithm. Learned values for the weights W1 through W6 will determine the relative importance of the various board features in determining the value of the board.
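A tiny Python sketch of this linear evaluation function; the particular weight values and the sample feature vector are made up for illustration:

# Board features [X1..X6]: black pieces, red pieces, black kings, red kings,
# black pieces threatened by red, red pieces threatened by black.
def v(board_features, weights):
    """Linear evaluation V(b) = W0 + W1*X1 + ... + W6*X6."""
    w0, rest = weights[0], weights[1:]
    return w0 + sum(w * x for w, x in zip(rest, board_features))

weights = [0.0, 1.0, -1.0, 1.5, -1.5, -0.5, 0.5]   # hypothetical values for W0..W6
b = [3, 0, 1, 0, 0, 0]                             # a sample board state (features X1..X6)
print(v(b, weights))                               # the value this V assigns to the board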
Partial design of checkers learning
program
 Task T: playing checkers
 Performance measure P: % of games won in the world
tournament
 Training experience E: games played against itself
 Target function: V : B → ℝ, where B is the set of legal board states and ℝ is the set of real numbers
 Target function representation : V(b) = W0 + W1·X1 + W2·X2 + W3·X3 + W4·X4 + W5·X5 + W6·X6
4) Choosing a Function Approximation
Algorithm
 To learn a Target Function f, we need a set of Training
Examples.
 Training Example Representation :
 Ordered pair = (b, Vtrain(b))
 E.g : Black won the game, so X2 = 0 (no red pieces remain) and Vtrain(b) = +100
 b = (X1=3, X2=0, X3=1, X4=0, X5=0, X6=0)
⟨(X1=3, X2=0, X3=1, X4=0, X5=0, X6=0), +100⟩
Choosing a Function
Approximation Algorithm
 There are 2 steps in this phase :
a) Estimating the Training Values :
Vtrain(b) ← V̂(Successor(b))
where V̂ represents the learner's current approximation to V and Successor(b) is the next board state following b in which it is again the program's turn to move. In other words, the training value of a board state is estimated from the current value of the state the move leads to.
b) Adjusting the weights :
We use the LMS (Least Mean Squares) rule, which updates each weight as Wi ← Wi + η (Vtrain(b) − V̂(b)) Xi, where η is a small constant (the learning rate).
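A minimal sketch of one LMS weight update in Python, reusing the linear evaluation function v() from the earlier sketch; the learning rate and the single training example are invented for illustration:

def lms_update(weights, board_features, v_train, eta=0.1):
    """One LMS step: Wi <- Wi + eta * (Vtrain(b) - V(b)) * Xi, with X0 taken as 1."""
    error = v_train - v(board_features, weights)   # how far the current estimate is off
    xs = [1] + list(board_features)                # X0 = 1 pairs with the bias weight W0
    return [w + eta * error * x for w, x in zip(weights, xs)]

# Hypothetical training example: the board b above, with training value +100 (black won).
weights = lms_update(weights, b, v_train=100.0)
print(weights)   # each weight is nudged so that V(b) moves toward +100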
Final Design
Introduction to ML Approaches
1) Artificial Neural Network (ANN)
Artificial Neural Network is a deep learning method that arose from the concept of biological neural networks in the human brain.
 There are three kinds of layers in the network architecture: the input layer, one or more hidden layers, and the output layer.
 Because of the numerous layers, it is sometimes referred to as an MLP (Multi-Layer Perceptron).
 Neural networks learn by example; they are not explicitly programmed to perform a specific task.
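A minimal sketch of a forward pass through such a network, assuming NumPy is available; the layer sizes, random weights, and sigmoid activation are arbitrary choices made only for illustration:

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Architecture: 3 inputs -> 4 hidden units -> 2 outputs (sizes chosen arbitrarily).
W1, b1 = rng.normal(size=(3, 4)), np.zeros(4)   # input layer -> hidden layer weights
W2, b2 = rng.normal(size=(4, 2)), np.zeros(2)   # hidden layer -> output layer weights

x = np.array([0.5, -1.2, 3.0])                  # one example input vector
hidden = sigmoid(x @ W1 + b1)                   # hidden layer activations
output = sigmoid(hidden @ W2 + b2)              # output layer activations
print(output)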
Introduction to ML Approaches
ANN Applications
Artificial Neural Networks E.g
Thank You
