Decision Tree
A decision tree is one of the most popular and effective tools for classification and prediction. It is a flowchart-like tree structure in which each internal node denotes a test on an attribute, each branch represents an outcome of the test, and each leaf node (terminal node) holds a class label.
The decision tree in the figure above classifies a particular morning according to whether it is suitable for playing tennis, returning the classification (in this case Yes or No) associated with the leaf that is reached.
For example, the instance
(Outlook = Rain, Temperature = Hot, Humidity = High, Wind = Strong)
would be sorted down the rightmost branch of this decision tree (the Outlook = Rain branch, then Wind = Strong) and would therefore be classified as a negative instance.
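The traversal just described is easy to express in code. Below is a minimal sketch in Python, assuming the standard PlayTennis tree pictured above; the nested-dict encoding, the attribute names, and the classify helper are illustrative choices, not part of any particular library.

# A decision tree encoded as nested dicts: each internal node maps an
# attribute name to {branch value: subtree}, and each leaf is simply a
# class label ("Yes" or "No"). Assumed structure, mirroring the figure.
play_tennis_tree = {
    "Outlook": {
        "Sunny": {"Humidity": {"High": "No", "Normal": "Yes"}},
        "Overcast": "Yes",
        "Rain": {"Wind": {"Strong": "No", "Weak": "Yes"}},
    }
}

def classify(tree, instance):
    # Sort the instance down the tree until a leaf (a class label) is reached.
    while isinstance(tree, dict):
        attribute = next(iter(tree))   # the attribute tested at this node
        value = instance[attribute]    # the outcome of the test
        tree = tree[attribute][value]  # follow the matching branch
    return tree                        # leaf node: the class label

instance = {"Outlook": "Rain", "Temperature": "Hot",
            "Humidity": "High", "Wind": "Strong"}
print(classify(play_tennis_tree, instance))  # -> No (a negative instance)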
In other words, a decision tree represents a disjunction of conjunctions of constraints on the attribute values of instances. The tree above corresponds to the expression:
(Outlook = Sunny ^ Humidity = Normal) v (Outlook = Overcast) v (Outlook = Rain ^ Wind = Weak)
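The expression can also be evaluated directly; the short sketch below checks it for the instance from the example (attribute names as before, and the plays_tennis helper is hypothetical):

def plays_tennis(x):
    # The disjunction of conjunctions read off the tree's positive leaves.
    return ((x["Outlook"] == "Sunny" and x["Humidity"] == "Normal")
            or x["Outlook"] == "Overcast"
            or (x["Outlook"] == "Rain" and x["Wind"] == "Weak"))

instance = {"Outlook": "Rain", "Temperature": "Hot",
            "Humidity": "High", "Wind": "Strong"}
print(plays_tennis(instance))  # -> False, matching the negative (No) leaf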
Strengths and Weaknesses of the Decision Tree Approach
The strengths of decision tree methods are:
Decision trees are able to generate understandable rules.
Decision trees perform classification without requiring much computation.
Decision trees are able to handle both continuous and categorical variables.
Decision trees provide a clear indication of which fields are most important for
prediction or classification (the sketch after this list illustrates the first and last of these points).
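As one illustration, the sketch below fits a scikit-learn tree on a toy, hand-made encoding of the PlayTennis data (the dataset and the integer coding of Outlook and Wind are assumptions for the example; scikit-learn trees expect numeric inputs), then prints the learned rules and the per-field importances.

# Sketch: understandable rules and field importances from a fitted tree.
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy data: Outlook (0=Sunny, 1=Overcast, 2=Rain), Humidity (%), Wind (0=Weak, 1=Strong).
X = [[0, 85, 0], [0, 90, 1], [1, 78, 0], [2, 96, 0], [2, 80, 0],
     [2, 70, 1], [1, 65, 1], [0, 95, 0], [0, 70, 0], [2, 80, 0],
     [0, 70, 1], [1, 90, 1], [1, 75, 0], [2, 91, 1]]
y = ["No", "No", "Yes", "Yes", "Yes", "No", "Yes",
     "No", "Yes", "Yes", "Yes", "Yes", "Yes", "No"]

clf = DecisionTreeClassifier(random_state=0).fit(X, y)

# Understandable rules: the tree printed as nested if/else conditions.
print(export_text(clf, feature_names=["Outlook", "Humidity", "Wind"]))

# Which fields matter most: one impurity-based importance per feature.
print(dict(zip(["Outlook", "Humidity", "Wind"], clf.feature_importances_)))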
The weaknesses of decision tree methods are:
Decision trees are less appropriate for estimation tasks where the goal is to
predict the value of a continuous attribute.
Decision trees are prone to errors in classification problems with many classes and
a relatively small number of training examples.
Decision trees can be computationally expensive to train. At each node, each
candidate splitting field must be sorted before its best split can be found
(a sketch of this search follows the list). In some algorithms, combinations
of fields are used and a search must be made for optimal combining weights.
Pruning algorithms can also be expensive, since many candidate sub-trees must
be formed and compared.
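To make the per-node cost concrete, the sketch below performs the work the last point describes for a single continuous candidate field: sort the values, then score every threshold between adjacent distinct values. Gini impurity is assumed as the splitting criterion, and the data is made up for illustration.

# Sketch of the split search for one continuous field at one node.
def gini(labels):
    # Gini impurity: 1 minus the sum of squared class proportions.
    counts = {}
    for label in labels:
        counts[label] = counts.get(label, 0) + 1
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

def best_split(values, labels):
    pairs = sorted(zip(values, labels))  # the sort the text refers to
    best = (float("inf"), None)
    for i in range(1, len(pairs)):
        if pairs[i - 1][0] == pairs[i][0]:
            continue  # no threshold fits between two equal values
        threshold = (pairs[i - 1][0] + pairs[i][0]) / 2
        left = [label for _, label in pairs[:i]]
        right = [label for _, label in pairs[i:]]
        # Weighted impurity of the two children this split would create.
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(pairs)
        best = min(best, (score, threshold))
    return best  # (weighted impurity, best threshold)

humidity = [85, 90, 78, 96, 80, 70, 65, 95, 70, 80, 70, 90, 75, 91]
played   = ["No", "No", "Yes", "Yes", "Yes", "No", "Yes",
            "No", "Yes", "Yes", "Yes", "Yes", "Yes", "No"]
print(best_split(humidity, played))

Repeating this search for every candidate field at every node, on progressively smaller subsets of the data, is what makes tree growing expensive.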