ID4 Algorithm - Incremental Decision Tree Learning
In ID4, we are effectively combining the decision tree with the decision
tree learning algorithm.
To support incremental learning, we can ask any node in the tree to
update itself given a new example. There are three cases (a sketch of
this update routine follows the three steps below):
1. If the node is a terminal node (i.e., an action) and the example's
action matches, then the example is consistent with the tree as it
stands; we simply add it to the node's list of examples.
2. If the node is a terminal node, but the example's action does not
match, then we make the node into a decision and use the ID3 algorithm
to determine the best split to make.
3. If the node is not a terminal node, then it is already a decision. We
add the new example to the node's current list of examples and determine
the best attribute to make the decision on. The best attribute is chosen
using the information gain metric, as we saw in ID3.
If the attribute returned is the same as the current attribute for the
decision (and it will be, most of the time), then we determine which of
the daughter nodes the new example gets mapped to, and we update that
daughter node with the new example.
If the attribute returned is different, then it means the new example
makes a different decision optimal. If we change the decision at this point,
then all of the tree further down the current branch will be invalid. So we
delete the whole tree from the current decision down and perform the
basic ID3 algorithm using the current decision’s examples plus the new
one.
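The following Python sketch makes the three cases concrete. It is
illustrative rather than definitive: the names (Node, entropy,
best_attribute, id3_build, id4_update) and the encoding of an example as
an attribute dictionary paired with an action label are assumptions made
for this sketch.

    from collections import Counter
    from math import log2

    # An example is (attributes, action), e.g.
    # ({"health": "Healthy", "cover": "Exposed", "ammo": "Empty"}, "Run")

    class Node:
        """Either a terminal (action) node or a decision on one attribute."""
        def __init__(self, examples):
            self.examples = list(examples)   # every node keeps its examples
            self.attribute = None            # None marks a terminal node
            self.daughters = {}              # attribute value -> Node

        @property
        def action(self):
            # The action a terminal node recommends.
            return self.examples[0][1]

    def entropy(examples):
        counts = Counter(action for _, action in examples)
        total = len(examples)
        return -sum(c / total * log2(c / total) for c in counts.values())

    def best_attribute(examples, attributes):
        """The attribute with the highest information gain, as in ID3."""
        def gain(attr):
            remainder = 0.0
            for value in {attrs[attr] for attrs, _ in examples}:
                subset = [ex for ex in examples if ex[0][attr] == value]
                remainder += len(subset) / len(examples) * entropy(subset)
            return entropy(examples) - remainder
        return max(attributes, key=gain)

    def id3_build(examples, attributes):
        """Plain batch ID3, used for the initial build and for rebuilds."""
        node = Node(examples)
        if entropy(examples) == 0 or not attributes:
            return node                      # pure set: leave as an action node
        node.attribute = best_attribute(examples, attributes)
        rest = [a for a in attributes if a != node.attribute]
        for value in {attrs[node.attribute] for attrs, _ in examples}:
            subset = [ex for ex in examples if ex[0][node.attribute] == value]
            node.daughters[value] = id3_build(subset, rest)
        return node

    def id4_update(node, example, attributes):
        """Incrementally update a (sub)tree with one new example."""
        node.examples.append(example)
        if node.attribute is None:                 # terminal node
            if example[1] == node.action:          # case 1: action matches,
                return node                        # nothing more to do
            # case 2: mismatch, so turn the terminal into a decision via ID3
            return id3_build(node.examples, attributes)
        # case 3: decision node, so recheck which attribute is best now
        if best_attribute(node.examples, attributes) != node.attribute:
            # A different attribute is now optimal; everything below this
            # decision is invalid, so rebuild the subtree with ID3.
            return id3_build(node.examples, attributes)
        # Same attribute: route the example to the matching daughter.
        value = example[0][node.attribute]
        rest = [a for a in attributes if a != node.attribute]
        if value in node.daughters:
            node.daughters[value] = id4_update(node.daughters[value], example, rest)
        else:
            node.daughters[value] = Node([example])  # new branch, a terminal
        return node

Note that cases 2 and 3 both fall back on the batch ID3 builder; the
incremental saving comes from case 1 and from the common situation in
case 3 where the best attribute is unchanged.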
Figure: The example tree in ID4 format
Walk Through
• It is difficult to visualize how ID4 works from the algorithm description alone,
so let’s work through an example.
• We have seven examples. The first five are similar to those used before:
– Healthy Exposed Empty Run
– Healthy In Cover With Ammo Attack
– Hurt In Cover With Ammo Attack
– Healthy In Cover Empty Defend
– Hurt In Cover Empty Defend
• We use these to create our initial decision tree (before applying ID4). The
decision tree looks like the one shown in the figure.
• We now add two new examples, one at a time, using ID4 (the sketch after
this list sets them up as data):
– Example 1: Hurt Exposed With Ammo Defend
– Example 2: Healthy Exposed With Ammo Run
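Encoded for the earlier sketch (the tuple-of-dictionary representation
and the helper names are, again, assumptions), the setup looks like this:

    attributes = ["health", "cover", "ammo"]

    initial = [
        ({"health": "Healthy", "cover": "Exposed",  "ammo": "Empty"},     "Run"),
        ({"health": "Healthy", "cover": "In Cover", "ammo": "With Ammo"}, "Attack"),
        ({"health": "Hurt",    "cover": "In Cover", "ammo": "With Ammo"}, "Attack"),
        ({"health": "Healthy", "cover": "In Cover", "ammo": "Empty"},     "Defend"),
        ({"health": "Hurt",    "cover": "In Cover", "ammo": "Empty"},     "Defend"),
    ]

    tree = id3_build(initial, attributes)   # the initial tree, before ID4

    example1 = ({"health": "Hurt",    "cover": "Exposed", "ammo": "With Ammo"}, "Defend")
    example2 = ({"health": "Healthy", "cover": "Exposed", "ammo": "With Ammo"}, "Run")

    tree = id4_update(tree, example1, attributes)   # walked through below
    tree = id4_update(tree, example2, attributes)   # forces a full rebuild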
The first example enters at the first decision node. ID4 uses the new
example, along with the five existing examples, to determine that ammo is the
best attribute to use for the decision.
This matches the current decision, so the example is sent to the
appropriate daughter node. Currently, the daughter node is an action: attack.
The action doesn't match, so we need to create a new decision here.
Using the basic ID3 algorithm, we decide to make the decision based
on cover. Each of the daughters of this new decision contains examples of
only a single action and is therefore an action node. The current decision
tree is then as shown in the figure.
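For inspection, a small helper (illustrative only) can render the tree;
after the first update it prints a tree of the shape just described:

    def print_tree(node, indent=0):
        """Render a (sub)tree; purely a debugging aid for this walk-through."""
        pad = "    " * indent
        if node.attribute is None:
            print(pad + "-> " + node.action)
        else:
            for value, daughter in node.daughters.items():
                print(pad + node.attribute + " = " + value + ":")
                print_tree(daughter, indent + 1)

    # print_tree(tree) after example 1 gives a tree of this shape
    # (branch order may vary, since the sketch iterates over sets):
    #
    # ammo = Empty:
    #     cover = Exposed:
    #         -> Run
    #     cover = In Cover:
    #         -> Defend
    # ammo = With Ammo:
    #     cover = In Cover:
    #         -> Attack
    #     cover = Exposed:
    #         -> Defend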
• Now we add our second example (Healthy Exposed With Ammo Run),
again entering at the root node. This time ID4 determines (based on
information gain) that ammo is no longer the best attribute to use in
this decision; cover is.
• So we throw away the sub-tree from this point down (which is the whole
tree, since we're at the first decision) and run the basic ID3 algorithm with
all seven examples. ID3 runs in the normal way and leaves the tree complete.
The resulting tree is shown in the figure.
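As a sanity check on that re-selection, the root information gains over
all seven examples can be recomputed with the entropy helper from the
earlier sketch; cover should come out highest (roughly 0.59 bits, against
roughly 0.31 for both ammo and health):

    all_seven = initial + [example1, example2]
    base = entropy(all_seven)
    for attr in attributes:
        remainder = 0.0
        for value in {ex[0][attr] for ex in all_seven}:
            subset = [ex for ex in all_seven if ex[0][attr] == value]
            remainder += len(subset) / len(all_seven) * entropy(subset)
        print(attr, round(base - remainder, 3))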