Machine Learning Basics
MACHINE LEARNING
Liviu Ciortuz
Department of CS, University of Iaşi, România
[Diagram: Statistical Learning, rooted in Statistics (model fitting), and Pattern Recognition, rooted in Engineering]
Bibliography
1. “Machine Learning”, Tom Mitchell. McGraw-Hill, 1997
[Diagram: the learned hypothesis is applied to test/generalization data to produce the predicted classification]
Basic ML Terminology
1. instance x, instance set X
concept c ⊆ X, or c : X → {0, 1}
example (labeled instance): ⟨x, c(x)⟩; positive examples, negative examples
2. hypothesis h : X → {0, 1}
hypothesis representation language
hypothesis set H
hypotheses consistent with the concept c: h(x) = c(x), ∀ example ⟨x, c(x)⟩
version space
3. learning = train + test
supervised learning (classification), unsupervised learning (clustering)
4. error_h = | {x ∈ X, h(x) ≠ c(x)} | (see the code sketch after this list)
training error, test error
accuracy, precision, recall
5. validation set, development set
n-fold cross-validation, leave-one-out cross-validation
overfitting
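
A minimal Python sketch (not from the slides; the `train` argument stands for any learning algorithm, and all names are illustrative) of the error measure in point 4 and of n-fold cross-validation in point 5:

```python
# Labeled examples are pairs (x, c(x)); a hypothesis h maps x to {0, 1}.

def error_count(h, examples):
    # |{x : h(x) != c(x)}|, as in point 4; dividing by len(examples) gives the error rate
    return sum(1 for x, cx in examples if h(x) != cx)

def n_fold_cross_validation(train, examples, n=10):
    # Split the data into n folds; each fold serves once as the test set.
    # Leave-one-out cross-validation is the special case n = len(examples).
    folds = [examples[i::n] for i in range(n)]
    test_errors = []
    for i in range(n):
        training_data = [ex for j, fold in enumerate(folds) if j != i for ex in fold]
        h = train(training_data)
        test_errors.append(error_count(h, folds[i]) / len(folds[i]))
    return sum(test_errors) / n   # average test error rate over the n folds
```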
Inductive Bias
Consider
• a concept learning algorithm L
• the instances X, and the target concept c
• the training examples D_c = {⟨x, c(x)⟩}
• Let L(x_i, D_c) denote the classification assigned to the instance x_i by L after training on the data D_c.
Definition:
The inductive bias of L is any minimal set of assertions B such that
(∀ x_i ∈ X) [ (B ∧ D_c ∧ x_i) ⊢ L(x_i, D_c) ]
for any target concept c and corresponding training examples D_c.
(A ⊢ B means A logically entails B)
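
As an illustration of this definition (a sketch, not from the slides; the two learners, the data, and the threshold assumption are invented for the example), a learner with no inductive bias cannot classify instances it has not seen, whereas a biased learner's classifications follow from its assumptions B together with D_c:

```python
# Contrast a rote learner (no inductive bias) with a learner whose bias B is
# "the target concept is a threshold on x".
def rote_learner(D_c):
    # Memorizes the training examples only; no bias, so unseen instances get no label.
    table = dict(D_c)
    return lambda x: table.get(x)          # None for any x not in D_c

def threshold_learner(D_c):
    # Bias B: c(x) = 1 iff x >= t, for some threshold t.
    t = min(x for x, label in D_c if label == 1)
    return lambda x: int(x >= t)

D_c = [(1, 0), (2, 0), (5, 1), (7, 1)]     # invented training examples <x, c(x)>
print(rote_learner(D_c)(4))                # None: nothing is entailed without a bias
print(threshold_learner(D_c)(4))           # 0: follows deductively from B and D_c
```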
Inductive systems can be modelled by equivalent deductive systems.
Evaluation measures for a hypothesis h with respect to the target concept c
(tp − true positives, fp − false positives, tn − true negatives, fn − false negatives):

precision: P = tp / (tp + fp)
recall (or: sensitivity): R = tp / (tp + fn)
F-measure: F = 2 (P × R) / (P + R)
specificity: Sp = tn / (tn + fp)
fallout: fp / (tn + fp)
Matthews Correlation Coefficient:
MCC = (tp × tn − fp × fn) / √((tp + fp)×(tn + fn)×(tp + fn)×(tn + fp))
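
These measures translate directly into code; a minimal Python sketch (not part of the slides; the function name and the example counts are illustrative):

```python
from math import sqrt

def evaluation_measures(tp, fp, tn, fn):
    precision   = tp / (tp + fp)                      # P
    recall      = tp / (tp + fn)                      # R, also called sensitivity
    f_measure   = 2 * precision * recall / (precision + recall)
    specificity = tn / (tn + fp)                      # Sp
    fallout     = fp / (tn + fp)
    mcc = (tp * tn - fp * fn) / sqrt((tp + fp) * (tn + fn) * (tp + fn) * (tn + fp))
    return {"P": precision, "R": recall, "F": f_measure,
            "Sp": specificity, "fallout": fallout, "MCC": mcc}

# Illustrative counts only
print(evaluation_measures(tp=40, fp=10, tn=45, fn=5))
```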
Lazy learning vs. eager learning algorithms
Eager: generalize before seeing query
◦ ID3, Backpropagation, Naive Bayes, Radial basis function networks, . . .
• Must create global approximation
Lazy: wait for query before generalizing
◦ k-Nearest Neighbor, Locally weighted regression, Case-based reasoning
• Can create many local approximations
Does it matter?
If they use the same hypothesis space H, lazy learners can represent
more complex functions.
E.g., a lazy Backpropagation algorithm can learn a NN which is different for each query point, compared to the eager version of Backpropagation.
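
To make the lazy side concrete, a minimal Python sketch (not from the slides; the data points and function name are illustrative) of k-Nearest Neighbor, which simply stores the training examples and generalizes only locally, once a query arrives:

```python
from collections import Counter

def squared_euclidean(a, b):
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

def knn_classify(query, examples, k=3, distance=squared_euclidean):
    # examples: list of (feature_vector, label) pairs.
    # Build a local approximation around the query: majority label of the
    # k training instances closest to it.
    neighbors = sorted(examples, key=lambda ex: distance(ex[0], query))[:k]
    return Counter(label for _, label in neighbors).most_common(1)[0][0]

# "Training" is just storing the examples; all the work happens at query time.
data = [((0.0, 0.0), 0), ((0.1, 0.2), 0), ((1.0, 1.0), 1), ((0.9, 0.8), 1)]
print(knn_classify((0.2, 0.1), data, k=3))   # -> 0
```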
ADDENDA