Machine Learning Overview
Sources
• AAAI. Machine Learning.
http://www.aaai.org/Pathfinder/html/machine.html
• Dietterich, T. (2003). Machine Learning. Nature Encyclopedia of
Cognitive Science.
• Doyle, P. Machine Learning.
http://www.cs.dartmouth.edu/~brd/Teaching/AI/Lectures/Summaries/learning.html
• Dyer, C. (2004). Machine Learning.
http://www.cs.wisc.edu/~dyer/cs540/notes/learning.html
• Mitchell, T. (1997). Machine Learning.
• Nilsson, N. (2004). Introduction to Machine Learning.
http://robotics.stanford.edu/people/nilsson/mlbook.html
• Russell, S. (1997). Machine Learning. Handbook of Perception and
Cognition, Vol. 14, Chap. 4.
• Russell, S. (2002). Artificial Intelligence: A Modern Approach, Chap. 18-20. http://aima.cs.berkeley.edu
What is Learning?
• “Learning denotes changes in a system that ... enable
a system to do the same task … more efficiently the
next time.” - Herbert Simon
• “Learning is constructing or modifying representations
of what is being experienced.” - Ryszard Michalski
• “Learning is making useful changes in our minds.” -
Marvin Minsky
[Diagram: fields related to machine learning — statistics, decision theory, information theory, cognitive science, databases, psychological models, evolutionary models, neuroscience]
[Diagram: a learning agent — the learning element uses knowledge, learning goals, and performance feedback to make changes to the performance element, which chooses actions in the environment; a problem generator proposes exploratory problems]
Learning Element
Design affected by:
• performance element used
• e.g., utility-based agent, reactive agent, logical
agent
• functional component to be learned
• e.g., classifier, evaluation function, perception-action function
• representation of functional component
• e.g., weighted linear function, logical theory, HMM
• feedback available
• e.g., correct action, reward, relative preferences
Dimensions of Learning Systems
• type of feedback
• supervised (labeled examples)
• unsupervised (unlabeled examples)
• reinforcement (reward)
• representation
• attribute-based (feature vector)
• relational (first-order logic)
• use of knowledge
• empirical (knowledge-free)
• analytical (knowledge-guided)
Outline
• Supervised learning
• empirical learning (knowledge-free)
• attribute-value representation
• logical representation
• analytical learning (knowledge-guided)
• Reinforcement learning
• Unsupervised learning
• Performance evaluation
• Computational learning theory
Inductive (Supervised) Learning
Basic Problem: Induce a representation of a function (a
systematic relationship between inputs and outputs)
from examples.
• target function f: X → Y
• example (x,f(x))
• hypothesis g: X → Y such that g(x) = f(x)
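As a minimal illustration (a Python sketch, not from the slides): suppose the unknown target f is linear; least squares over the examples (x, f(x)) recovers a hypothesis g that matches f on the training data.

```python
# Hypothetical sketch: induce g: X -> Y from examples (x, f(x)).
# Here the "unknown" target f happens to be linear, so a least-squares
# fit recovers it exactly from the training examples.
def fit_linear(examples):
    """Return (a, b) minimizing squared error of g(x) = a*x + b."""
    n = len(examples)
    sx = sum(x for x, _ in examples)
    sy = sum(y for _, y in examples)
    sxx = sum(x * x for x, _ in examples)
    sxy = sum(x * y for x, y in examples)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

examples = [(x, 2 * x + 1) for x in range(5)]  # target f(x) = 2x + 1
a, b = fit_linear(examples)  # g(x) = a*x + b agrees with f on the examples
```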
Key Concepts
• entropy
• impurity of a set of examples (entropy = 0 if perfectly
homogeneous)
• (#bits needed to encode class of an arbitrary example)
• information gain
• expected reduction in entropy caused by partitioning
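The two definitions above can be computed directly (a minimal Python sketch; the example labels are assumptions):

```python
import math

def entropy(labels):
    """Impurity of a set of examples: 0 if perfectly homogeneous."""
    n = len(labels)
    probs = [labels.count(c) / n for c in set(labels)]
    return -sum(p * math.log2(p) for p in probs)

def information_gain(labels, partitions):
    """Expected reduction in entropy caused by partitioning `labels`."""
    n = len(labels)
    remainder = sum(len(p) / n * entropy(p) for p in partitions)
    return entropy(labels) - remainder

# A perfectly balanced set has entropy 1 bit; a split that separates
# the classes completely gains the full bit.
labels = ["+", "+", "-", "-"]
gain = information_gain(labels, [["+", "+"], ["-", "-"]])
```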
Decision Tree Induction: Attribute Selection
• Weight Update
• perceptron training rule
• linear programming
• delta rule
• backpropagation
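The perceptron training rule listed above, w ← w + η(t − o)x, can be sketched as follows (a minimal Python illustration; the AND data, learning rate, and epoch limit are assumptions):

```python
# Perceptron training rule: on each misclassified example,
# update every weight by w_i <- w_i + eta * (t - o) * x_i.
def train_perceptron(examples, eta=0.1, epochs=100):
    """examples: list of (x_vector, target) with target in {0, 1}."""
    w = [0.0] * (len(examples[0][0]) + 1)  # last weight is the bias
    for _ in range(epochs):
        converged = True
        for x, t in examples:
            xs = list(x) + [1.0]  # append constant bias input
            o = 1 if sum(wi * xi for wi, xi in zip(w, xs)) > 0 else 0
            if o != t:
                converged = False
                for i, xi in enumerate(xs):
                    w[i] += eta * (t - o) * xi
        if converged:  # a full pass with no errors
            break
    return w

# AND is linearly separable, so the rule converges:
and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = train_perceptron(and_data)
```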
Neural Network Learning: Decision Boundary
Learning a Classifier
• the optimal linear separator is the one with the largest
margin between the positive examples on one side and the
negative examples on the other
• found by quadratic programming optimization
Support Vector Machines (continued)
Key Concept: Training data enters optimization problem
in the form of dot products of pairs of points.
• support vectors
• weights associated with data points are zero except for those
points nearest the separator (i.e., the support vectors)
• kernel function K(xi,xj)
• a function applied to pairs of points that evaluates their dot
product in the corresponding (higher-dimensional) feature
space F, without having to compute the mapping Ф(x) directly
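The kernel idea can be checked numerically (a Python sketch, not from the slides): for the quadratic kernel K(x,y) = (x·y)², the explicit feature map is Ф(x₁,x₂) = (x₁², √2·x₁x₂, x₂²), and K evaluates the dot product in that feature space without forming Ф(x).

```python
import math

def kernel(x, y):
    """Quadratic kernel K(x, y) = (x . y)^2."""
    return sum(xi * yi for xi, yi in zip(x, y)) ** 2

def phi(x):
    """Explicit feature map for the quadratic kernel (2D inputs)."""
    x1, x2 = x
    return (x1 * x1, math.sqrt(2) * x1 * x2, x2 * x2)

x, y = (1.0, 2.0), (3.0, 0.5)
lhs = kernel(x, y)                                    # kernel evaluation
rhs = sum(a * b for a, b in zip(phi(x), phi(y)))      # dot product in F
# lhs and rhs agree, so the feature-space dot product is never
# computed directly during SVM training.
```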
Bayesian Networks
Settings
• known structure, fully observable (parameter learning)
• unknown structure, fully observable (structural
learning)
• known structure, hidden variables (EM algorithm)
• unknown structure, hidden variables (?)
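In the first setting — known structure, fully observable — parameter learning reduces to counting: each CPT entry is the observed frequency. A minimal Python sketch (the two-node Rain → WetGrass network and its data are hypothetical):

```python
# Maximum-likelihood parameter learning for a known-structure,
# fully observable network: estimate P(child | parent) by counting.
data = [
    {"Rain": True,  "WetGrass": True},
    {"Rain": True,  "WetGrass": True},
    {"Rain": True,  "WetGrass": False},
    {"Rain": False, "WetGrass": False},
    {"Rain": False, "WetGrass": False},
    {"Rain": False, "WetGrass": True},
]

def cpt(child, parent, data):
    """P(child=True | parent=value) as an observed frequency."""
    table = {}
    for value in (True, False):
        rows = [d for d in data if d[parent] == value]
        table[value] = sum(d[child] for d in rows) / len(rows)
    return table

p_wet_given_rain = cpt("WetGrass", "Rain", data)
```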
Nearest Neighbor Models
Key Idea: Properties of an input x are likely to be similar
to those of points in the neighborhood of x.
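The key idea above yields a very short classifier (a minimal Python sketch; the training points and k are assumptions):

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Majority label among the k training points nearest to `query`.

    train: list of (point, label); query: tuple of floats.
    """
    nearest = sorted(train, key=lambda pl: math.dist(pl[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

train = [((0, 0), "a"), ((0, 1), "a"), ((1, 0), "a"),
         ((5, 5), "b"), ((5, 6), "b"), ((6, 5), "b")]
label = knn_predict(train, (0.5, 0.5), k=3)  # query sits in the "a" cluster
```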
Output
• Grandparent(x,y) ⇔
[∃z Mother(x,z) ∧ Mother(z,y)] ∨ [∃z Mother(x,z) ∧ Father(z,y)]
∨ [∃z Father(x,z) ∧ Mother(z,y)] ∨ [∃z Father(x,z) ∧ Father(z,y)]
Learning Logic Theories
Key Concepts
• specialization
• triggered by false positives (goal: exclude negative examples)
• achieved by adding conditions, dropping disjuncts
• generalization
• triggered by false negatives (goal: include positive examples)
• achieved by dropping conditions, adding disjuncts
Learning
• current-best-hypothesis: incrementally improve single
hypothesis (e.g., sequential covering)
• least-commitment search: maintain all hypotheses
consistent with examples seen so far (e.g., version
space)
Learning Logic Theories: Decision Boundary
Analytical Learning
Prior Knowledge in Learning
Recall:
Grandparent(x,y) ⇔
[∃z Mother(x,z) ∧ Mother(z,y)] ∨ [∃z Mother(x,z) ∧ Father(z,y)]
∨ [∃z Father(x,z) ∧ Mother(z,y)] ∨ [∃z Father(x,z) ∧ Father(z,y)]
• Suppose initial theory also included:
• Parent(x,y) ⇔ [Mother(x,y) ∨ Father(x,y)]
• Final Hypothesis:
• Grandparent(x,y) ⇔ [∃z Parent(x,z) ∧ Parent(z,y)]
• utility problem
• cost of determining whether learned knowledge is applicable
may outweigh the benefits from its application
Relevance-Based Learning
Mary travels to Brazil and meets her first Brazilian
(Fernando), who speaks Portuguese. She concludes
that all Brazilians speak Portuguese but not that all
Brazilians are named Fernando.
Reinforcement Learning
Key Concept: exploration/exploitation tradeoff
Unsupervised Learning
Approaches
• clustering (similarity-based)
• density estimation (e.g., EM algorithm)
Performance Tasks
• understanding and visualization
• anomaly detection
• information retrieval
• data compression
Performance Evaluation
• Randomly split examples into training set U
and test set V.
• Use training set to learn a hypothesis H.
• Measure % of V correctly classified by H.
• Repeat for different random splits and average
results.
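The procedure above — random train/test splits, repeated and averaged — can be sketched in Python (the majority-class learner and synthetic data are stand-in assumptions, not from the slides):

```python
import random

def evaluate(examples, learn, trials=10, train_frac=0.8):
    """Average test-set accuracy over repeated random splits."""
    accs = []
    for _ in range(trials):
        shuffled = examples[:]
        random.shuffle(shuffled)                     # random split U / V
        cut = int(len(shuffled) * train_frac)
        train, test = shuffled[:cut], shuffled[cut:]
        h = learn(train)                             # learn hypothesis H
        accs.append(sum(h(x) == y for x, y in test) / len(test))
    return sum(accs) / len(accs)                     # average the results

def majority_learner(train):
    """Trivial learner: always predict the training set's majority class."""
    labels = [y for _, y in train]
    majority = max(set(labels), key=labels.count)
    return lambda x: majority

data = [(i, "pos") for i in range(80)] + [(i, "neg") for i in range(20)]
avg_acc = evaluate(data, majority_learner, trials=20)
```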
Performance Evaluation: Learning Curves
[Figure: learning curves — classification accuracy (or classification error) vs. #training examples]
Performance Evaluation: ROC Curves
[Figure: ROC curve — trading off false positives against false negatives]
Performance Evaluation: Accuracy/Coverage
[Figure: classification accuracy vs. coverage]
Triple Tradeoff in Empirical Learning
• size/complexity of the learned classifier
• amount of training data
• generalization accuracy
bias-variance tradeoff
Computational Learning Theory
probably approximately correct (PAC) learning
With probability 1 − δ, the error will be at most ε.
Key Concepts
• examples drawn from same distribution (stationarity
assumption)
• sample complexity is a function of confidence, error,
and size of hypothesis space
m ≥ (1/ε) (ln(1/δ) + ln |H|)
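Plugging numbers into the bound above (a Python sketch; the choice of ε, δ, and |H| = 2^(2^10) for boolean functions over 10 attributes is an illustrative assumption):

```python
import math

def sample_complexity(epsilon, delta, hypothesis_space_size):
    """m >= (1/epsilon) * (ln(1/delta) + ln|H|): examples sufficient so
    that, with probability 1 - delta, error is at most epsilon."""
    return math.ceil((1 / epsilon) *
                     (math.log(1 / delta) + math.log(hypothesis_space_size)))

# Boolean functions over n = 10 attributes: |H| = 2**(2**10).
m = sample_complexity(epsilon=0.1, delta=0.05, hypothesis_space_size=2 ** (2 ** 10))
# m is a few thousand -- polynomial in 1/epsilon and ln|H|, even though
# the hypothesis space itself is astronomically large.
```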
Current Machine Learning Research
• Representation
• data sequences
• spatial/temporal data
• probabilistic relational models
• …
• Approaches
• ensemble methods
• cost-sensitive learning
• active learning
• semi-supervised learning
• collective classification
• …