Unit 5 ML
Tom M. Mitchell
Outline
Two formulations for learning: Inductive and Analytical
Perfect domain theories and Prolog-EBG
A Positive Example
The Inductive Generalization Problem
Given:
◦ Instances
◦ Hypotheses
◦ Target Concept
◦ Training examples of target concept
Determine:
◦ Hypotheses consistent with the training examples
The Analytical Generalization Problem
Given:
◦ Instances
◦ Hypotheses
◦ Target Concept
◦ Training examples of target concept
◦ Domain theory for explaining examples
Determine:
◦ Hypotheses consistent with the training examples
An Analytical Generalization Problem
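This slide's example can be made concrete with the classic SafeToStack domain theory from Mitchell's textbook. The encoding below as Python (head, body) pairs is our own illustrative sketch, not library code; predicate names follow the standard example, and lowercase tokens inside literals denote variables.

```python
# Mitchell's SafeToStack domain theory, written as Horn clauses
# (head, [body literals]). The Python encoding is illustrative only.
domain_theory = [
    ("SafeToStack(x,y)", ["Not(Fragile(y))"]),
    ("SafeToStack(x,y)", ["Lighter(x,y)"]),
    ("Lighter(x,y)",     ["Weight(x,wx)", "Weight(y,wy)", "LessThan(wx,wy)"]),
    ("Weight(x,w)",      ["Volume(x,v)", "Density(x,d)", "Equal(w, v*d)"]),
    ("Weight(x,5)",      ["Type(x,Endtable)"]),
]
```

Given a training example with facts about one object's volume and density and another object's type, these clauses suffice to prove SafeToStack of the pair, and it is the existence of such a proof that makes analytical generalization possible.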
Learning from Perfect Domain Theories
Assumes domain theory is correct (error-free)
◦ Prolog-EBG is algorithm that works under this assumption
◦ This assumption holds in chess and other search problems
◦ Allows us to assume explanation = proof
◦ Later we’ll discuss methods that assume approximate domain theories
Prolog-EBG
Initialize hypothesis = {}
For each positive training example not covered by hypothesis:
1. Explain how the training example satisfies the target concept, in terms of the domain theory
2. Analyze the explanation to determine the most general conditions under which this explanation (proof) holds
3. Refine the hypothesis by adding a new rule, whose preconditions are the above conditions, and whose consequent asserts the target concept
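As a runnable illustration of this loop, here is a heavily simplified Python sketch (our own, not Mitchell's code). It cuts one corner loudly: step 2's "analyze" is approximated by lifting the instance's constants to variables, rather than computing the weakest preconditions by regression, and each positive example is assumed to arrive with the facts its explanation used already identified.

```python
# Simplified Prolog-EBG loop. Literals are tuples like ("Volume", "Obj1", 2.0).
# ASSUMPTION (ours): each positive example is a pair
# (instance_constants, explanation_facts), and "analyze" merely renames
# instance constants to variables -- a crude stand-in for regression.

def lift(literal, mapping):
    """Replace instance constants by variables per `mapping`."""
    return (literal[0],) + tuple(mapping.get(t, t) for t in literal[1:])

def prolog_ebg(positives, target_pred):
    hypothesis = []                       # learned (head, preconditions) rules
    for constants, explanation in positives:
        mapping = {c: f"x{i}" for i, c in enumerate(constants)}
        head = (target_pred,) + tuple(mapping[c] for c in constants)
        pre = tuple(lift(l, mapping) for l in explanation)
        rule = (head, pre)
        if rule not in hypothesis:        # crude "not yet covered" check
            hypothesis.append(rule)       # refine: add the new rule
    return hypothesis

example = (("Obj1", "Obj2"),
           [("Volume", "Obj1", 2.0),
            ("Density", "Obj1", 0.3),
            ("Type", "Obj2", "Endtable")])
print(prolog_ebg([example], "SafeToStack"))
```

Note what the shortcut loses: the learned rule keeps the specific volume 2.0 and density 0.3, whereas regressing the target through the proof would instead retain only general constraints (e.g. that the computed weight of x is less than 5), which is exactly why step 2 matters.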
Explanation of a Training Example
Computing the Weakest Preimage of Explanation
Regression Algorithm
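A single regression step can be sketched directly. Following the idea of Mitchell's Regress procedure (the function and variable names below are our simplification, not the textbook's code): to regress a frontier of literals through the rule that concluded one of them, unify that literal with the rule head and substitute the rule body in its place. We assume rule variables are freshly renamed, so one-directional matching suffices.

```python
# One regression step toward the weakest preimage. Literals are tuples
# like ("Lighter", "x", "y"); lowercase strings in rule heads/bodies
# are rule variables.

def match(head, lit, theta):
    """Bind variables of the rule `head` to the terms of `lit`."""
    if head[0] != lit[0] or len(head) != len(lit):
        return None
    for h, t in zip(head[1:], lit[1:]):
        if isinstance(h, str) and h[:1].islower():   # rule variable
            if theta.setdefault(h, t) != t:
                return None
        elif h != t:
            return None
    return theta

def subst(theta, lit):
    """Apply the bindings `theta` to a literal's arguments."""
    return (lit[0],) + tuple(theta.get(t, t) if isinstance(t, str) else t
                             for t in lit[1:])

def regress_step(frontier, target, head, body):
    """Replace `target` in `frontier` by the rule body, under the
    unifier of `target` with the rule head: the weakest preimage
    with respect to this rule application."""
    theta = match(head, target, {})
    if theta is None:
        return None
    rest = [l for l in frontier if l != target]
    return rest + [subst(theta, b) for b in body]

# Regress SafeToStack's frontier through the Lighter rule:
lighter_rule = (("Lighter", "u", "v"),
                [("Weight", "u", "uw"), ("Weight", "v", "vw"),
                 ("LessThan", "uw", "vw")])
frontier = [("Lighter", "x", "y")]
print(regress_step(frontier, frontier[0], *lighter_rule))
# -> [('Weight', 'x', 'uw'), ('Weight', 'y', 'vw'), ('LessThan', 'uw', 'vw')]
```

Iterating this step through every rule used in the explanation, leaf to root, yields the most general operational preconditions under which the proof goes through.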
Lessons from Safe-to-Stack Example
Justified generalization from single example
Explanation determines feature relevance
Regression determines needed feature constraints
Generality of result depends on domain theory
Still require multiple examples
Perspectives on Prolog-EBG
Is it learning?
Are you learning when you get better over time at chess?
◦ Even though you already know everything in principle once you know the rules of the game...