
1

Machine Learning & Machine Vision.
By
Ashok Kumar

2

Contents
• Description :
• Machine learning uses interdisciplinary techniques such as statistics, linear algebra, optimization, and computer science to
create automated systems that can sift through large volumes of data at high speed to make predictions or decisions
without human intervention. Machine learning as a field is now incredibly pervasive, with applications spanning from
business intelligence to homeland security, from analyzing biochemical interactions to structural monitoring of aging
bridges, and from emissions to astrophysics, etc. This class will familiarize students with a broad cross-section of models
and algorithms for machine learning, and prepare students for research or industry application of machine learning
techniques.
• Overview:
• This course builds on fundamental knowledge of computer science principles and skills, probability and statistics, and the theory and application of linear algebra, and provides a broad introduction to machine learning and statistical pattern recognition.
• Topics include: (1) supervised learning (generative/discriminative learning, parametric/nonparametric learning, neural
networks, and support vector machines); (2) unsupervised learning (clustering, dimensionality reduction, kernel methods);
(3) learning theory (bias/variance tradeoffs; VC theory; large margins); and (4) reinforcement learning and adaptive control.
• The course also discusses recent applications of machine learning, such as robotic control, data mining, autonomous navigation, bioinformatics, speech recognition, and text and web data processing.

3

Main Topics:
1) Introduction: what is ML; Problems, data, and tools; Visualization; Matlab (I).
2) Linear regression; SSE; gradient descent; closed form; normal equations; features
• Overfitting and complexity; training, validation, test data, and introduction to Matlab (II).
3) Probability and classification, Bayes optimal decisions
Naive Bayes and Gaussian class-conditional distribution
4) Linear classifiers
Bayes' Rule and Naive Bayes Model
5) Logistic regression, online gradient descent, Neural Networks
Decision tree
6) Ensemble methods: Bagging, random forests, boosting
A more detailed discussion on Decision Tree and Boosting
7) Unsupervised learning: clustering, k-means, hierarchical agglomeration
8) Advanced discussion on clustering and EM
9) Latent space methods; PCA.
10) Text representations; naive Bayes and multinomial models; clustering and latent space models.
11) VC-dimension, structural risk minimization; margin methods and support vector machines (SVM).
12) Support vector machines and large-margin classifiers.
Time series; Markov models; autoregressive models.

4

Lesson Content: Model Evaluation and Validation
• Lesson Title Learning Outcomes
• TRAINING AND TESTING MODELS ➔ Load data with Pandas, then train and test models with Scikit-learn.
• EVALUATION METRICS ➔ Learn about metrics such as accuracy, precision, and recall used to measure the performance of your models.
• EVALUATION AND VALIDATION ➔ Choose the best model using cross-validation and grid search (see the sketch after this list).
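A minimal sketch of the workflow these lessons describe; the Iris dataset, classifier choice, and parameter grid are illustrative assumptions, not part of the slides:

```python
# Load a table with Pandas, split train/test, pick hyperparameters by
# cross-validated grid search, then score accuracy/precision/recall.
import pandas as pd
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score

iris = load_iris()
df = pd.DataFrame(iris.data, columns=iris.feature_names)
df["label"] = iris.target                      # Iris stands in for real data

X, y = df.drop(columns="label"), df["label"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# Grid search with 5-fold cross-validation chooses the best model.
grid = GridSearchCV(DecisionTreeClassifier(random_state=0),
                    param_grid={"max_depth": [2, 4, 8, None]}, cv=5)
grid.fit(X_train, y_train)

pred = grid.predict(X_test)
print("accuracy :", accuracy_score(y_test, pred))
print("precision:", precision_score(y_test, pred, average="macro"))
print("recall   :", recall_score(y_test, pred, average="macro"))
```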
Lesson Content: Supervised Learning
Lesson Title Learning Outcomes
LINEAR REGRESSION ➔ Difference between Regression and Classification ➔ Learn to predict values with Linear Regression.
PERCEPTRON ALGORITHM ➔ Learn the definition of a perceptron as a building block for neural networks, and the perceptron
algorithm for classification (sketched after this list).
LOGISTIC REGRESSION ➔ Learn to predict states using Logistic Regression.
NEURAL NETWORKS ➔ Learn the definition of a Neural Network ➔ Learn to train them using backpropagation ➔ Build a
neural network starting from a single perceptron.
DECISION TREES ➔ Train Decision Trees to predict states ➔ Use Entropy to build decision trees recursively ➔ Random forests.
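The perceptron lesson above can be made concrete in a few lines: on each misclassified point the classic rule updates the weights by w ← w + lr · y · x. The toy data and learning rate below are invented for illustration:

```python
# Minimal NumPy sketch of the perceptron algorithm for classification.
import numpy as np

def perceptron(X, y, lr=0.1, epochs=50):
    X = np.hstack([X, np.ones((len(X), 1))])   # append a bias feature
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w) <= 0:             # misclassified (or on boundary)
                w += lr * yi * xi              # classic perceptron update
    return w

# Toy linearly separable data, labels in {-1, +1} (invented).
X = np.array([[2.0, 1.0], [1.5, 2.0], [-1.0, -1.5], [-2.0, -0.5]])
y = np.array([1, 1, -1, -1])
w = perceptron(X, y)
print("weights:", w)   # sign([x1, x2, 1] @ w) gives the predicted class
```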

5

NAIVE BAYES ➔ Learn the Bayes rule, and how to apply it to predicting data using the Naive Bayes algorithm. ➔ Train models
using Bayesian Learning ➔ Use Bayesian Inference to create Bayesian Networks of several variables ➔ Bayes NLP Mini-Project.
SUPPORT VECTOR MACHINES ➔ Learn to train a Support Vector Machine to separate data linearly ➔ Use Kernel Methods
in order to train SVMs on data that is not linearly separable (see the sketch after this list).
ENSEMBLE OF LEARNERS ➔ Enhance traditional algorithms via boosting ➔ AdaBoost.
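As a hedged illustration of the SVM lesson, the sketch below trains a linear SVM and an RBF-kernel SVM on synthetic ring-shaped data, which is not linearly separable; the dataset and parameters are assumptions for demonstration:

```python
# A linear SVM fails on concentric rings; an RBF kernel separates them.
from sklearn.datasets import make_circles
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_circles(n_samples=400, noise=0.05, factor=0.4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for kernel in ("linear", "rbf"):
    clf = SVC(kernel=kernel).fit(X_train, y_train)
    print(kernel, "test accuracy:", clf.score(X_test, y_test))
```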
Lesson Content: Unsupervised Learning
Lesson Title Learning Outcomes
CLUSTERING ➔ Learn the basics of clustering data ➔ Cluster data with the K-means algorithm (see the sketch after this list).
HIERARCHICAL AND DENSITY-BASED CLUSTERING ➔ Cluster data with Single Linkage Clustering ➔ Cluster data with
DBSCAN, a clustering method that captures the insight that clusters are dense groups of points.
GAUSSIAN MIXTURE MODELS ➔ Cluster data with Gaussian Mixture Models ➔ Optimize Gaussian Mixture Models
with Expectation Maximization.
FEATURE SCALING ➔ Learn to scale features in your data ➔ Learn to select the best features for training data.
DIMENSIONALITY REDUCTION ➔ Reduce the dimensionality of the data using Principal Component Analysis and
Independent Component Analysis.
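A brief sketch tying several of these lesson topics together: scale features, cluster with K-means, and reduce dimensionality with PCA, on synthetic blob data (all parameters are illustrative assumptions):

```python
# Feature scaling, K-means clustering, and PCA on synthetic data.
from sklearn.datasets import make_blobs
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

X, _ = make_blobs(n_samples=300, n_features=5, centers=3, random_state=0)
X = StandardScaler().fit_transform(X)          # feature scaling

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
X2 = PCA(n_components=2).fit_transform(X)      # 5-D -> 2-D for inspection
print(labels[:10], X2.shape)
```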

6

ADVANCED MACHINE LEARNING
• Lesson Title Learning Outcomes
MACHINE LEARNING TO DEEP LEARNING ➔ The basics of deep learning, including softmax, one-hot encoding, and cross entropy (sketched after this list).
➔ Basic linear classification models such as Logistic Regression, and their associated error function.
DEEP NEURAL NETWORKS ➔ Review: What is a Neural Network? ➔ Activation functions: sigmoid, tanh, and ReLUs. ➔ How to train a neural
network using backpropagation and the chain rule. ➔ How to improve a neural network using techniques such as regularization and dropout.
CONVOLUTIONAL NEURAL NETWORKS ➔ What is a Convolutional Neural Network? ➔ How CNNs are used in image recognition.
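The deep-learning basics named above (softmax, one-hot encoding, cross entropy) fit in a few lines of NumPy; the class scores below are invented:

```python
# Softmax turns raw scores into probabilities; cross-entropy compares
# those probabilities against a one-hot encoded label.
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())           # subtract max for numerical stability
    return e / e.sum()

scores = np.array([2.0, 1.0, 0.1])    # raw class scores (invented)
probs = softmax(scores)

one_hot = np.array([1.0, 0.0, 0.0])   # one-hot encoding of "class 0"
cross_entropy = -np.sum(one_hot * np.log(probs))
print(probs, cross_entropy)
```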
Train a Quadcopter to Fly:
Lesson Content: Reinforcement Learning
Lesson Title Learning Outcomes
WELCOME TO RL ➔ The basics of reinforcement learning and OpenAI Gym.
THE RL FRAMEWORK: THE PROBLEM ➔ Learn how to define Markov Decision Processes to solve real-world problems.
THE RL FRAMEWORK: THE SOLUTION ➔ Learn about policies and value functions. ➔ Derive the Bellman Equations.
DYNAMIC PROGRAMMING ➔ Write your own implementations of iterative policy evaluation, policy improvement, policy iteration,
and value iteration (iterative policy evaluation is sketched after this list).
MONTE CARLO METHODS ➔ Implement classic Monte Carlo prediction and control methods. ➔ Learn about greedy and epsilon-greedy policies.
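As a sketch of the dynamic-programming lesson, the code below runs iterative policy evaluation, sweeping V(s) ← Σ_a π(a|s) Σ_s' P(s'|s,a) [R(s,a) + γ V(s')] until it stabilizes. The tiny MDP (transitions, rewards, uniform policy) is randomly generated for illustration:

```python
# Iterative policy evaluation on a small invented MDP.
import numpy as np

n_states, n_actions, gamma = 3, 2, 0.9
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s, a, s']
R = rng.normal(size=(n_states, n_actions))                        # R[s, a]
pi = np.full((n_states, n_actions), 1.0 / n_actions)  # uniform random policy

V = np.zeros(n_states)
while True:
    # Expected one-step return under pi, bootstrapping from current V.
    V_new = np.einsum("sa,sa->s", pi, R + gamma * P @ V)
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new
print("state values:", V)
```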

7

TEMPORAL-DIFFERENCE METHODS ➔ Learn the difference between the Sarsa, Q-Learning, and Expected Sarsa algorithms (Q-Learning is sketched after this list).
RL IN CONTINUOUS SPACES ➔ Learn how to adapt traditional algorithms to work with continuous spaces.
DEEP Q-LEARNING ➔ Extend value-based reinforcement learning methods to complex problems using deep neural networks.
POLICY GRADIENTS ➔ Policy-based methods try to directly optimize for the optimal policy. Learn how they work, and why they are
important, especially for domains with continuous action spaces.
ACTOR-CRITIC METHODS ➔ Learn how to combine value-based and policy-based methods, bringing together the best of both
worlds, to solve challenging reinforcement learning problems.
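A minimal tabular Q-learning sketch with an epsilon-greedy behavior policy, on a toy one-dimensional corridor environment invented for illustration (the goal is the rightmost state):

```python
# Tabular Q-learning with epsilon-greedy exploration on a toy corridor.
import numpy as np

n_states, n_actions = 6, 2             # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.5, 0.95, 0.2
rng = np.random.default_rng(0)

def step(s, a):
    s2 = min(max(s + (1 if a == 1 else -1), 0), n_states - 1)
    done = s2 == n_states - 1          # reward 1 only on reaching the goal
    return s2, (1.0 if done else 0.0), done

for _ in range(300):                   # episodes
    s, done, t = 0, False, 0
    while not done and t < 100:        # step cap keeps early episodes short
        if rng.random() < eps:         # epsilon-greedy, random tie-breaking
            a = int(rng.integers(n_actions))
        else:
            a = int(rng.choice(np.flatnonzero(Q[s] == Q[s].max())))
        s2, r, done = step(s, a)
        target = r + gamma * Q[s2].max() * (not done)
        Q[s, a] += alpha * (target - Q[s, a])   # Q-learning update
        s, t = s2, t + 1
print("greedy policy:", Q.argmax(axis=1))  # mostly 1s: always move right
```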
Useful Topics:
1. Introductory Topics
2. Linear Regression and Feature Selection
3. Linear Classification
4. Support Vector Machines and Artificial Neural Networks
5. Bayesian Learning and Decision Trees
6. Evaluation Measures
7. Hypothesis Testing
8. Ensemble Methods
9. Clustering
10. Graphical Models
11. Learning Theory and Expectation Maximization

8

Introduction to Machine Learning:
• Machine learning is a subfield of computer science that evolved from the study of pattern recognition and
computational learning theory in artificial intelligence. Machine learning explores the construction and study of
algorithms that can learn from and make predictions on data. Such algorithms operate by building a model from
example inputs in order to make data-driven predictions or decisions, rather than following strictly static program
instructions.
Machine learning is closely related to and often overlaps with computational statistics, a discipline that also
specializes in prediction-making. It has strong ties to mathematical optimization, which delivers methods, theory, and
application domains to the field. Machine learning is employed in a range of computing tasks where designing and
programming explicit algorithms is infeasible. Example applications include spam filtering and optical character
recognition (OCR).
• Ever since computers were invented, we have wondered whether they might be made to learn. If we could
understand how to program them to learn, to improve automatically with experience, the impact would be dramatic.
Imagine computers learning from medical records which treatments are most effective for new diseases, houses
learning from experience to optimize energy costs based on the particular usage patterns of their occupants, or
personal software assistants learning the evolving interests of their users in order to highlight especially relevant
stories from the online morning newspaper. A successful understanding of how to make computers learn would
open up many new uses of computers and new levels of competence and customization. And a detailed
understanding of information-processing algorithms for machine learning might lead to a better understanding of
human learning abilities (and disabilities) as well.
• We do not yet know how to make computers learn nearly as well as people learn. However, algorithms have been
invented that are effective for certain types of learning tasks, and a theoretical understanding of learning is
beginning to emerge. Many practical computer programs have been developed to exhibit useful types of learning,
and significant commercial applications have begun to appear. For problems such as speech recognition, algorithms
based on machine learning outperform all other approaches that have been attempted to date.

9

• In the areas of face recognition, face detection, and facial age estimation, machine learning plays an important
role and serves as the fundamental technique in much of the existing literature. For example, in face recognition,
many researchers focus on using dimensionality reduction techniques for extracting personal features. The most
well-known are eigenfaces [1], based on principal component analysis (PCA), and fisherfaces [2], based on linear
discriminant analysis (LDA).
• Machine learning techniques are also widely used in facial age estimation to extract features that are otherwise
hard to find and to build the mapping from facial features to predicted age. Although machine learning is not the only
method in pattern recognition (for example, much research still aims to extract useful features through image and
video analysis), it can provide theoretical analysis and practical guidelines to refine and improve recognition
performance.
• Based on the large amount of available data and the intrinsic ability to learn knowledge from data, we believe
that machine learning techniques will attract much more attention in pattern recognition, data mining, and
information retrieval.
• Human designers often produce machines that do not work as well as desired in the environments in which they
are used. In fact, certain characteristics of the working environment might not be completely known at design
time. Machine learning methods can be used for on-the-job improvement of existing machine designs.
• The amount of knowledge available about certain tasks might be too large for explicit encoding by humans.
Machines that learn this knowledge gradually might be able to capture more of it than humans would want to
write down.
• Environments change over time. Machines that can adapt to a changing environment would reduce the need for
constant redesign.
• New knowledge about tasks is constantly being discovered by humans. Vocabulary changes, and there is a constant
stream of new events in the world. Continual redesign of AI systems to conform to new knowledge is impractical,
but machine learning methods might be able to track much of it.

10

What is Machine Learning?
• Learning, like intelligence, covers such a broad range of processes that it is difficult to define precisely. There are
several parallels between animal and machine learning. Certainly, many techniques in machine learning derive from
the efforts of psychologists to make their theories of animal and human learning more precise through computational
models. It seems likely also that the concepts and techniques being explored by researchers in machine learning
may illuminate certain aspects of biological learning.
A machine learns whenever it changes its structure, program, or data (based on its inputs or in response to external
information) in such a manner that its expected future performance improves. Some of these changes, such as the
addition of a record to a database, fall comfortably within the province of other disciplines and are not necessarily
better understood for being called learning. But, for example, when the performance of a speech-recognition
machine improves after hearing several samples of a person’s speech, we feel quite justified in saying that the
machine has learned.
Machine learning usually refers to changes in systems that perform tasks associated with artificial intelligence
(AI). Such tasks involve recognition, diagnosis, planning, robot control, prediction, etc. The "changes" might be either
enhancements to already performing systems or ab initio synthesis of new systems. To be slightly more specific,
consider the architecture of a typical AI agent. This agent perceives and models its environment and computes
appropriate actions, perhaps by anticipating their effects. Changes made to any of the components of such an agent
might count as learning. Different learning mechanisms might be employed depending on which subsystem
is being changed.

11

• But there are important engineering reasons as well. Some of these are:
• Some tasks cannot be defined well except by example; that is, we might be able to specify input/output pairs but not
a concise relationship between inputs and desired outputs. We would like machines to be able to adjust their
internal structure to produce correct outputs for a large number of sample inputs and thus suitably constrain their
input/output function to approximate the relationship implicit in the examples.
• It is possible that hidden among large piles of data are important relationships and correlations. Machine
learning methods can often be used to extract these relationships (data mining).
• In machine learning, data plays an indispensable role, and the learning algorithm is used to discover and learn
knowledge or properties from the data. The quality or quantity of the dataset will affect the learning and prediction
performance.

12

Learning and Tasks in Machine Learning.
• Machine learning tasks are typically classified into three broad categories, depending on the nature of the learning
“signal” or “feedback” available to a learning system.
• Supervised learning: The computer is presented with example inputs and their desired outputs, given by a “teacher”,
and the goal is to learn a general rule that maps inputs to outputs.
• Unsupervised learning: No labels are given to the learning algorithm, leaving it on its own to find structure in its
input. Unsupervised learning can be a goal in itself (discovering hidden patterns in data) or a means towards an end.
• Reinforcement learning: A computer program interacts with a dynamic environment in which it must perform a certain
goal (such as driving a vehicle), without a teacher explicitly telling it whether it has come close to its goal or not. Another
example is learning to play a game by playing against an opponent.
Between supervised and unsupervised learning lies semi-supervised learning, where the teacher gives an incomplete training
signal: a training set with some (often many) of the target outputs missing. Transduction is a special case of this principle,
where the entire set of problem instances is known at learning time, but part of the targets are missing. Among other
categories of machine learning problems, learning to learn learns its own inductive bias based on previous experience.
Developmental learning, elaborated for robot learning, generates its own sequences (also called a curriculum) of learning
situations to cumulatively acquire repertoires of novel skills through autonomous self-exploration and social interaction with
human teachers, using guidance mechanisms such as active learning, maturation, motor synergies, and imitation.
Another categorization of machine learning tasks arises when one considers the desired output of a machine-learned system.
• In classification, inputs are divided into two or more classes, and the learner must produce a model that assigns unseen
inputs to one or more (in multi-label classification) of these classes. This is typically tackled in a supervised way. Spam
filtering is an example of classification, where the inputs are email (or other) messages and the classes are “spam” and “not spam”.

13

• In clustering, a set of inputs is to be divided into groups. Unlike in classification, the groups are not known beforehand,
making this typically an unsupervised task.
• Density estimation finds the distribution of inputs in some space.
• Dimensionality reduction simplifies inputs by mapping them into a lower-dimensional space. Topic modeling is a related
problem, where a program is given a list of human language documents and is tasked to find out which documents cover
similar topics.
Wellsprings of Machine Learning:
Work in machine learning is now converging from several sources. These different traditions each bring different methods and
different vocabulary which are now being assimilated into a more unified discipline.
• Statistics: A long-standing problem in statistics is how best to use samples drawn from unknown probability distributions to
help decide from which distribution some new sample is drawn. A related problem is how to estimate the value of an unknown
function at a new point given the values of this function at a set of sample points. Statistical methods for dealing with these
problems can be considered instances of machine learning because the decision and estimation rules depend on a corpus of
samples drawn from the problem environment.
• Brain Models: Non-linear elements with weighted inputs have been suggested as simple models of biological neurons.
Brain modelers are interested in how closely these networks approximate the learning phenomena of living brains.
Several important machine learning techniques are based on networks of nonlinear elements, often called neural networks.
• Adaptive Control Theory: Control theorists study the problem of controlling a process having unknown parameters that
must be estimated during operation. Often, the parameters change during operation, and the control process must track
these changes. Some aspects of controlling a robot based on sensory inputs represent instances of this sort of problem.
• Psychological Models: Psychologists have studied the performance of humans in various learning tasks. An early example
is the EPAM network for storing and retrieving one member of a pair of words. Some of the work in reinforcement learning can
be traced to efforts to model how reward stimuli influence the learning of goal-seeking behavior in animals.

14

• Artificial Intelligence: From the beginning, AI research has been concerned with machine learning. Samuel developed a
prominent early program that learned parameters of a function for evaluating board positions in the game of checkers.
AI researchers have also explored the role of analogies in learning and how future actions and decisions can be based on
previous exemplary cases. Recent work has been directed at discovering rules for expert systems using decision-tree methods
and inductive logic programming. Another theme has been saving and generalizing the results of problem solving using
explanation-based learning.
• Evolutionary Models:
In nature, not only do individual animals learn to perform better, but species evolve to be better fitted to their individual niches.
Since the distinction between evolving and learning can be blurred in computer systems, techniques that model certain
aspects of biological evolution have been proposed as learning methods to improve the performance of computer programs.
Genetic algorithms and genetic programming are the most prominent computational techniques for evolution.
Varieties of Machine Learning:
Orthogonal to the question of the historical source of any learning technique is the more important question of what is to be
learned. Here, we take it that the thing to be learned is a computational structure of some sort. A variety of computational
structures can be learned:
• Functions
• Logic programs and rule sets
• Finite-state machines
• Grammars
• Problem solving systems
The change to the existing structure might be simply to make it more computationally efficient rather than to increase the
coverage of the situations it can handle. Much of the terminology used throughout is best introduced in the setting of
learning input-output functions, taken up next.

15

Learning Input-Output Functions

16

• The following terminology is used in describing the problem of learning a function. Imagine that there is a function, f, and
the task of the learner is to guess what it is. Our hypothesis about the function to be learned is denoted by h. Both f and h
are functions of a vector-valued input X = (x1, x2, ..., xi, ..., xn), which has n components. We think of h as being
implemented by a device that has X as input and h(X) as output. Both f and h themselves may be vector-valued. We assume
a priori that the hypothesized function, h, is selected from a class of functions H. Sometimes we know that f also belongs to
this class or to a subset of this class. We select h based on a training set, Ξ, of m input vector examples.
Types of Learning:
There are two major settings in which we wish to learn a function. In one, called supervised learning, we know (sometimes
only approximately) the values of f for the m samples in the training set, Ξ. We assume that if we can find a hypothesis, h,
that closely agrees with f for the members of Ξ, then this hypothesis will be a good guess for f, especially if Ξ is large.
Curve-fitting is a simple example of supervised learning of a function. Suppose we are given the values of a two-dimensional
function, f, at four sample points.

17

• We fit these four points with a function, h, drawn from the set, H, of second-degree functions. The fit is a two-dimensional
parabolic surface above the x1, x2 plane that passes through the points. This parabolic function, h, is our hypothesis
about the function, f, that produced the four samples. In this case, h = f at the four samples, but we need not have
required exact matches.
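A minimal sketch of this curve-fitting example: fit a second-degree function h(x1, x2) to four samples by least squares. The sample points and values are invented; with six coefficients and four points, the minimum-norm least-squares solution interpolates them exactly, so h = f at the samples:

```python
# Fit a second-degree (quadratic) surface to four sample points.
import numpy as np

# Four sample points (x1, x2) and the observed values f(x1, x2), invented.
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
f = np.array([1.0, 2.0, 3.0, 5.0])

def basis(x1, x2):
    # Second-degree terms: 1, x1, x2, x1^2, x2^2, x1*x2.
    return np.stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2], axis=-1)

A = basis(X[:, 0], X[:, 1])
coef, *_ = np.linalg.lstsq(A, f, rcond=None)  # minimum-norm exact fit here

h = lambda x1, x2: basis(np.asarray(x1), np.asarray(x2)) @ coef
print(h(X[:, 0], X[:, 1]))   # reproduces f at the four samples: h = f there
```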
In the other setting, termed unsupervised learning, we simply have a training set of vectors without function values for them.
The problem in this case, typically, is to partition the training set into subsets, Ξ1, . . . , ΞR, in some appropriate way.
Unsupervised learning methods have application in taxonomic problems in which it is desired to invent ways to classify data
into meaningful categories.
Learning may also be used to find a new function, h, or to modify an existing one. An interesting special case is that of
changing an existing function into an equivalent one that is computationally more efficient. This type of learning is sometimes
called speed-up learning. A very simple example of speed-up learning involves deduction processes. From the formulas A ⊃
B and B ⊃ C, we can deduce C if we are given A. From this deductive process, we can create the formula A ⊃ C, a new formula,
but one that does not sanction any more conclusions than those that could be derived from the formulas we already had. It
does, however, let us derive C more quickly, given A. We can contrast speed-up learning with methods that create genuinely
new functions, ones that might give different results after learning than they did before. The latter methods involve inductive
learning. As opposed to deduction, there are no correct inductions, only useful ones.
Input Vectors
Because machine learning methods derive from so many different traditions, the terminology is rife with synonyms, and we will
be using most of them here. For example, the input vector is called by a variety of names: input vector, pattern vector,
feature vector, sample, example, and instance. The components, xi, of the input vector are variously called features,
attributes, input variables, and components. The values of the components can be of three main types: real-valued numbers,
discrete-valued numbers, or categorical values. As an example illustrating categorical values, information about a student
might be represented by the values of the attributes class, major, sex, and adviser. A particular student would then be
represented by a vector such as (sophomore, history, male, higgins). Additionally, categorical values may be ordered (as in
{small, medium, large}) or unordered (as in the example just given). Of course, mixtures of all these types of values are
possible. In all cases, it is possible to represent the input in unordered form by listing the names of the attributes together
with their values; the vector form assumes that the attributes are ordered and given implicitly. Both forms are illustrated in
the sketch below.
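A small sketch of the two representations just described, using the slide's student example; the attribute vocabulary beyond that one row (the list of possible classes) is invented:

```python
# Unordered vs. ordered (vector) representations of one input.
attributes = ["class", "major", "sex", "adviser"]
student = ("sophomore", "history", "male", "higgins")

# Unordered form: attribute names listed together with their values.
record = dict(zip(attributes, student))
print(record)    # {'class': 'sophomore', 'major': 'history', ...}

# Ordered (vector) form: relies on the fixed attribute order above.
vector = list(student)
print(vector)

# One common numeric encoding of a categorical attribute: one-hot.
classes = ["freshman", "sophomore", "junior", "senior"]  # invented vocabulary
one_hot = [1.0 if c == record["class"] else 0.0 for c in classes]
print(one_hot)   # sophomore -> [0.0, 1.0, 0.0, 0.0]
```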

18

• An important specialization uses Boolean values, which can be regarded as a special case of either discrete numbers (1, 0)
or of categorical variables (True, False).
• Outputs:
• The output may be a real number, in which case the process embodying the function, h, is called a function estimator, and
the output is called an output value or estimate. Alternatively, the output may be a categorical value, in which case the
process embodying h is variously called a classifier, a recognizer, or a categorizer, and the output itself is called a label, a
class, a category, or a decision. Classifiers have application in a number of recognition problems, for example in the
recognition of hand-printed characters. The input in that case is some suitable representation of the printed character, and
the classifier maps this input into one of, say, 64 categories. Vector-valued outputs are also possible, with components
being real numbers or categorical values. An important special case is that of Boolean output values. In that case, a
training pattern having value 1 is called a positive instance, and a training sample having value 0 is called a negative
instance. When the input is also Boolean, the classifier implements a Boolean function. We study the Boolean case in
some detail because it allows us to make important general points in a simplified setting. Learning a Boolean function is
sometimes called concept learning, and the function is called a concept.
• Training Regimes:
• There are several ways in which the training set, Ξ, can be used to produce a hypothesized function. In the batch method,
the entire training set is available and used all at once to compute the function, h. A variation of this method uses the
entire training set to modify a current hypothesis iteratively until an acceptable hypothesis is obtained. By contrast, in the
incremental method, we select one member at a time from the training set and use this instance alone to modify a current
hypothesis. Then another member of the training set is selected, and so on. The selection method can be random (with
replacement), or it can cycle through the training set iteratively. If the entire training set becomes available one member at a
time, then we might also use an incremental method, selecting and using training set members as they arrive. (Alternatively,
at any stage all training set members so far available could be used in a batch process.) Using the training set members as
they become available is called an online method. Online methods might be used, for example, when the next training
instance is some function of the current hypothesis and the previous instance, as it would be when a classifier is used to
decide on a robot's next action given its current set of sensory inputs. A sketch contrasting the two regimes follows.

19

• Noise:
• Sometimes the vectors in the training set are corrupted by noise. There are two kinds of noise. Class noise randomly alters
the value of the function; attribute noise randomly alters the values of the components of the input vector. In either case, it
would be inappropriate to insist that the hypothesized function agree precisely with the values of the samples in the
training set.
• Performance Evaluation
• Even though there is no correct answer in inductive learning, it is important to have methods to evaluate the result of
learning. We will discuss this matter in more detail later, but, briefly, in supervised learning the induced function is usually
evaluated on a separate set of inputs and function values, called the testing set. A hypothesized function is said to
generalize when it guesses well on the testing set. Both mean-squared error and the total number of errors are common
measures (a tiny sketch follows).
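Both measures on an invented testing set, for concreteness:

```python
# The two common evaluation measures named above.
import numpy as np

y_test = np.array([1.0, 0.0, 1.0, 1.0, 0.0])   # true values (testing set)
y_pred = np.array([0.9, 0.2, 0.4, 1.0, 0.1])   # hypothesis outputs

mse = np.mean((y_test - y_pred) ** 2)               # mean-squared error
n_errors = np.sum((y_pred > 0.5) != (y_test > 0.5)) # total number of errors
print(mse, n_errors)   # 0.084 and 1
```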
• Basic Concepts and Ideas of Machine Learning:
• People constantly face decisions to make. For example, if the sky is cloudy, we may decide whether or not to bring an
umbrella. For a machine to make these kinds of choices, the intuitive way is to model the problem as a mathematical
expression. The mathematical expression could be designed directly from the problem background. For instance, a
vending machine could use the standards and security features of currency to detect counterfeit money. In other
problems, where we can only acquire several measurements and the corresponding labels but do not know the specific
relationship between them, learning is a better way to find the underlying connection. Another good illustration of the
distinction between designing and learning is image compression. JPEG, the most widely used image compression
standard, exploits the block-based DCT to extract spatial frequencies and then unequally quantizes each frequency
component to achieve data compression. The success of the DCT comes not only from the image properties but also from
human visual perception. Setting aside the side information, the KL transform (Karhunen-Loeve transform), which learns
the best projection basis for a given image, has been proved to best reduce the redundancy [11]. In much of the literature,
the knowledge acquired from human understanding or the intrinsic factors of the problem is called domain knowledge.
The sketch below contrasts the two transforms.