Decision Boundary
In classification problems, a decision boundary is the surface (line in 2D) that separates different predicted classes.
It's the set of points where the model is undecided, where the prediction flips from one class to another.
1. Why Do We Need It?
In regression, we predict a continuous value.
In classification, we choose a category.
To classify data, the model learns how to divide the input space into regions, one region per class.
The borderline between regions is the decision boundary.
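As a minimal sketch of this idea, the toy rule below (an assumed example, not from the text) divides the 2D plane into two regions; the line x + y = 0 between them is the decision boundary:

```python
import numpy as np

# Toy linear rule: class 1 where x + y > 0, class 0 otherwise.
# The boundary between the two regions is the line x + y = 0.
def predict(points):
    return (points[:, 0] + points[:, 1] > 0).astype(int)

points = np.array([[2.0, 1.0], [-1.0, -2.0], [3.0, -1.0]])
print(predict(points))  # -> [1 0 1]
```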
2. Logistic Regression Example
In binary classification, logistic regression computes a probability between 0 and 1:

p(y = 1 | x) = σ(wᵀx + b) = 1 / (1 + e^(−(wᵀx + b)))

We predict class 1 when p ≥ 0.5, and class 0 otherwise.
So the decision boundary is where:

wᵀx + b = 0, i.e. p = 0.5

This is a straight line (or hyperplane in higher dimensions), the place where the model is exactly 50% confident.
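A small sketch of this, with hand-picked weights w and bias b (assumed values for illustration): a point lying exactly on the line wᵀx + b = 0 gets probability 0.5, and points on either side flip between the two classes.

```python
import numpy as np

# Assumed example weights and bias for a 2D logistic regression model.
w = np.array([1.0, -2.0])
b = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict_proba(x):
    # Probability of class 1 for input x.
    return sigmoid(x @ w + b)

# This point satisfies w·x + b = 0.5*1 + 0.5*(-2) + 0.5 = 0,
# so it sits exactly on the decision boundary.
on_boundary = np.array([0.5, 0.5])
print(predict_proba(on_boundary))                        # -> 0.5
print(int(predict_proba(np.array([2.0, 0.0])) >= 0.5))   # -> 1 (class 1 side)
```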
3. Other Models
Different models produce different kinds of boundaries:
- Linear models (like logistic regression): straight lines
- k-NN: irregular, local shapes
- SVM with kernels: smooth curves
- Decision Trees: axis-aligned splits
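To see that different models really do draw different boundaries, here is a small sketch (toy data and a hand-picked linear rule, both assumed for illustration) comparing 1-nearest-neighbor with a linear classifier; near the boundary the two can disagree on the same point:

```python
import numpy as np

# Toy training set: two clusters, labeled 0 and 1.
X = np.array([[0.0, 0.0], [1.0, 0.0], [5.0, 5.0], [6.0, 5.0]])
y = np.array([0, 0, 1, 1])

def knn_predict(x):
    # 1-NN: copy the label of the closest training point.
    # The resulting boundary is irregular and follows the data locally.
    d = np.linalg.norm(X - x, axis=1)
    return int(y[np.argmin(d)])

def linear_predict(x):
    # Hand-picked linear rule separating the clusters: class 1 where x1 + x2 > 5.
    # The resulting boundary is a straight line.
    return int(x[0] + x[1] > 5)

q = np.array([2.6, 2.6])
print(knn_predict(q), linear_predict(q))  # the two models disagree near the boundary
```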
Summary
- A decision boundary separates one class from another.
- It shows where the model switches its prediction.
- Visualizing boundaries helps us understand how the model thinks, and whether it generalizes well.