Decision Trees in Machine Learning: A Comprehensive Guide

In our journey through the Machine Learning Mastery series, we have explored linear models and statistical foundations. Now, we move into one of the most intuitive and powerful algorithms in the supervised learning toolkit: Decision Trees. Whether you are building a recommendation engine or a medical diagnostic tool, Decision Trees provide a clear, logical path to making predictions.

What is a Decision Tree?

A Decision Tree is a non-parametric supervised learning method used for both classification and regression tasks. The goal is to create a model that predicts the value of a target variable by learning simple decision rules inferred from the data features.

Think of a Decision Tree as a flowchart where each internal node represents a "test" on an attribute (e.g., "Is the temperature higher than 30 degrees?"), each branch represents the outcome of the test, and each leaf node represents a class label or a continuous value.

Key Terminology

  • Root Node: The top-most node of the tree, representing the entire population or sample. It is the first node to be split into two or more sub-nodes that are more homogeneous than the original set.
  • Splitting: The process of dividing a node into two or more sub-nodes.
  • Decision Node: When a sub-node splits into further sub-nodes, it is called a decision node.
  • Leaf/Terminal Node: Nodes that do not split are called Leaf nodes. They represent the final output or prediction.
  • Pruning: The process of removing sub-nodes of a decision node to prevent overfitting. It is the opposite of splitting.
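To make these terms concrete, here is a minimal sketch of how a tree node could be represented in Java. The class and field names are illustrative, not taken from any library:

import java.util.Map;

// A node is either a decision node (it tests an attribute and has child branches)
// or a leaf/terminal node (it holds a final prediction).
class TreeNode {
    String attribute;                // attribute tested at this decision node, e.g. "Outlook"
    Map<String, TreeNode> children;  // branch label -> sub-node, e.g. "Sunny" -> subtree
    String prediction;               // set only on leaf nodes, e.g. "Yes" or "No"

    boolean isLeaf() {
        return children == null || children.isEmpty();
    }
}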

How Decision Trees Make Decisions

Decision Trees use various algorithms to decide how to split a node into two or more sub-nodes. A good split produces sub-nodes that are more homogeneous (pure) with respect to the target variable than the parent node, and the splitting criteria below are different ways of quantifying that purity.

1. Entropy and Information Gain

Used by the ID3 (Iterative Dichotomiser 3) algorithm and its successors, Entropy measures the impurity or randomness in the data: it is zero for a pure node and maximal when the classes are evenly mixed. Information Gain is the decrease in entropy after a dataset is split on an attribute, and the algorithm chooses the attribute that maximizes it.
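As a concrete illustration, here is a small Java sketch (the class and method names are our own, not from any ML library) that computes the entropy of a node from its class counts and the information gain of a candidate split. For instance, a node with 9 "Yes" and 5 "No" samples has an entropy of roughly 0.94 bits.

import java.util.List;

public class SplitMetrics {

    // Entropy of a node: -sum over classes of p * log2(p),
    // where p is the proportion of samples belonging to that class.
    public static double entropy(int[] classCounts) {
        int total = 0;
        for (int c : classCounts) total += c;
        double h = 0.0;
        for (int c : classCounts) {
            if (c == 0) continue;
            double p = (double) c / total;
            h -= p * (Math.log(p) / Math.log(2));
        }
        return h;
    }

    // Information gain of a split: entropy of the parent node minus the
    // weighted average entropy of the child nodes it is split into.
    public static double informationGain(int[] parentCounts, List<int[]> childCounts) {
        int total = 0;
        for (int c : parentCounts) total += c;
        double weightedChildEntropy = 0.0;
        for (int[] child : childCounts) {
            int size = 0;
            for (int c : child) size += c;
            weightedChildEntropy += ((double) size / total) * entropy(child);
        }
        return entropy(parentCounts) - weightedChildEntropy;
    }
}

A candidate split is then evaluated by passing the class counts of each resulting child node to informationGain and keeping the attribute with the highest score.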

2. Gini Impurity

Used by the CART (Classification and Regression Tree) algorithm, Gini Impurity measures how often a randomly chosen element from the node would be mislabeled if it were labeled at random according to the node's class distribution. A Gini score of 0 means the node is pure.
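A matching Java sketch for the Gini calculation (again, the method name is ours) could look like this:

// Gini impurity of a node: 1 - sum over classes of p^2,
// where p is the proportion of samples belonging to that class.
// 0 means the node is pure; for two classes the worst case is 0.5.
public static double gini(int[] classCounts) {
    int total = 0;
    for (int c : classCounts) total += c;
    double sumOfSquares = 0.0;
    for (int c : classCounts) {
        double p = (double) c / total;
        sumOfSquares += p * p;
    }
    return 1.0 - sumOfSquares;
}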

Visualizing a Decision Tree

To understand how a model might decide whether to play golf based on weather conditions, look at this logical flow:

[Root: Outlook]
|
|-- (Sunny) --> [Humidity]
|               |-- (High) --> Result: No
|               |-- (Normal) --> Result: Yes
|
|-- (Overcast) --> Result: Yes
|
|-- (Rainy) --> [Windy]
                |-- (Strong) --> Result: No
                |-- (Weak) --> Result: Yes

Practical Example: Pseudo-Logic for Java Developers

While Python dominates the ML tooling landscape, the underlying logic is easy to follow for any Java developer. Here is how the tree above might look when implemented as a set of manual rules:

public String predictPlayGolf(String outlook, String humidity, boolean windy) {
    // Root node: test the Outlook attribute first.
    if (outlook.equals("Overcast")) {
        // Overcast is a leaf: always play.
        return "Yes";
    } else if (outlook.equals("Sunny")) {
        // Sunny branch: the decision depends on Humidity.
        if (humidity.equals("High")) {
            return "No";
        } else {
            return "Yes";
        }
    } else if (outlook.equals("Rainy")) {
        // Rainy branch: the decision depends on Wind.
        if (windy) {
            return "No";
        } else {
            return "Yes";
        }
    }
    // Fallback for an outlook value never seen during training.
    return "Unknown";
}
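For example, calling predictPlayGolf("Rainy", "Normal", true) walks the Rainy branch, checks the windy flag, and returns "No", mirroring the Rainy sub-tree of the diagram above.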

Advantages and Disadvantages

Advantages

  • Easy to Understand: The output is visual and follows human-like logic.
  • Minimal Data Preparation: Unlike many other algorithms, decision trees don't require feature scaling or normalization, because splits depend only on the ordering of feature values.
  • Handles Both Data Types: Can handle both numerical and categorical data.

Disadvantages

  • Overfitting: Trees can become overly complex, capturing noise instead of the underlying pattern.
  • Instability: Small variations in data can result in a completely different tree structure.
  • Bias: Decision trees can become biased toward dominant classes if the dataset is imbalanced.

Common Mistakes to Avoid

  • Not Pruning the Tree: Allowing a tree to grow to its maximum depth usually leads to overfitting. Always constrain growth with parameters like max_depth or min_samples_leaf (see the sketch after this list).
  • Ignoring Feature Correlation: While trees handle non-linear relationships well, highly correlated features can sometimes lead to redundant splits.
  • Ignoring Class Imbalance: If 90% of your data belongs to one class, the tree will likely learn to predict that class most of the time. Use techniques like oversampling or class weighting.
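The max_depth and min_samples_leaf names above come from scikit-learn; in a hand-rolled Java tree builder, the same pre-pruning idea boils down to a stopping check like the rough sketch below (the method and parameter names are ours):

// Pre-pruning check used while growing the tree recursively: stop splitting a node
// when the tree is already deep enough or the node is too small to split cleanly.
public static boolean shouldStopSplitting(int currentDepth, int samplesInNode,
                                          int maxDepth, int minSamplesLeaf) {
    // Depth limit reached, or splitting would create leaves smaller than the minimum.
    return currentDepth >= maxDepth || samplesInNode < 2 * minSamplesLeaf;
}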

Real-World Use Cases

  • Banking: Determining creditworthiness and loan default risks.
  • Healthcare: Identifying high-risk patients based on symptoms and medical history.
  • E-commerce: Predicting whether a customer will churn or stay based on their behavior.

Interview Preparation Notes

  • Difference between Bagging and Boosting: Decision trees are the building blocks for Random Forests (Bagging) and Gradient Boosting Machines (Boosting), which we will cover in Topic 8.
  • Handling Missing Values: Some implementations of decision trees can handle missing values internally by following the majority branch or using surrogate splits.
  • Bias-Variance Tradeoff: A deep tree has low bias but high variance (overfitting), while a shallow tree has high bias but low variance (underfitting).

Summary

Decision Trees are a foundational concept in Machine Learning. They provide a transparent way to model complex decisions and are the precursor to more advanced ensemble methods. By understanding how to split nodes using Entropy or Gini Impurity and how to prevent overfitting through pruning, you can build robust models for a variety of tasks.

In the next lesson, Topic 8: Random Forests, we will see how combining multiple decision trees can significantly improve prediction accuracy and stability.