
Note
Updated: April 22, 2025

πŸ”€ Decision Boundary

In classification problems, a decision boundary is the surface in input space (a line in 2D) that separates the different predicted classes.

It is the set of points where the model is undecided, where the prediction flips from one class to another.

1. Why Do We Need It?

In regression, we predict a continuous value.
In classification, we choose a category.

To classify data, the model learns how to divide the input space into regions β€” one region per class.
The borderline between regions is the decision boundary.

2. Logistic Regression Example

In binary classification, logistic regression passes a linear score through the sigmoid function to compute a probability between 0 and 1:

$$\hat{y} = \sigma(w \cdot x + b), \qquad \sigma(z) = \frac{1}{1 + e^{-z}}$$

We predict class 1 when $\hat{y} \geq 0.5$, and class 0 otherwise.

So the decision boundary is where the probability is exactly 0.5. Since $\sigma(z) = 0.5$ exactly when $z = 0$, this gives:

$$\hat{y} = 0.5 \quad \Rightarrow \quad w \cdot x + b = 0$$

This is a straight line (or hyperplane in higher dimensions) β€” the place where the model is exactly 50% confident.
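
To make this concrete, here is a minimal sketch of reading the boundary off a fitted model. It assumes NumPy and scikit-learn are installed; the two-blob toy dataset and all constants are arbitrary choices for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy binary problem: two Gaussian blobs (made up for illustration).
X = np.vstack([
    rng.normal(loc=[-1.0, -1.0], scale=0.8, size=(50, 2)),
    rng.normal(loc=[1.0, 1.0], scale=0.8, size=(50, 2)),
])
y = np.array([0] * 50 + [1] * 50)

model = LogisticRegression().fit(X, y)

# The learned boundary is the set of points where w . x + b = 0.
w, b = model.coef_[0], model.intercept_[0]
print(f"boundary: {w[0]:.2f}*x1 + {w[1]:.2f}*x2 + {b:.2f} = 0")

# In 2D we can solve for x2 as a function of x1: x2 = -(w1*x1 + b) / w2.
x1 = np.array([-3.0, 3.0])
x2 = -(w[0] * x1 + b) / w[1]
print("two points on the boundary line:", list(zip(x1, x2)))
```

Points with $w \cdot x + b > 0$ get probability above 0.5 and are predicted as class 1; the sign of the linear score is all that matters for the predicted label.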

3. Other Models

Different models produce different kinds of boundaries, as the sketch after this list illustrates:

  • Linear models (like logistic regression): straight lines
  • k-NN: irregular, local shapes
  • SVM with kernels: smooth curves
  • Decision Trees: axis-aligned splits
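
To see these shapes side by side, here is a minimal sketch that colors a dense grid of points by each model's prediction, so the color change traces the decision boundary. It assumes scikit-learn and matplotlib are installed; the make_moons dataset and the hyperparameters are arbitrary choices for illustration.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_moons
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_moons(n_samples=200, noise=0.25, random_state=0)

models = {
    "Logistic Regression": LogisticRegression(),
    "k-NN (k=5)": KNeighborsClassifier(n_neighbors=5),
    "SVM (RBF kernel)": SVC(kernel="rbf"),
    "Decision Tree": DecisionTreeClassifier(max_depth=4),
}

# Predict on a dense grid; where the predicted class changes is the boundary.
xx, yy = np.meshgrid(np.linspace(-2, 3, 300), np.linspace(-1.5, 2, 300))
grid = np.c_[xx.ravel(), yy.ravel()]

fig, axes = plt.subplots(2, 2, figsize=(10, 8))
for ax, (name, model) in zip(axes.ravel(), models.items()):
    model.fit(X, y)
    zz = model.predict(grid).reshape(xx.shape)
    ax.contourf(xx, yy, zz, alpha=0.3)          # shaded class regions
    ax.scatter(X[:, 0], X[:, 1], c=y, s=15)     # training points
    ax.set_title(name)
plt.tight_layout()
plt.show()
```

A linear model can only draw a straight line on this dataset, while the other three can bend around the moons, matching the list above.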

Summary

  • A decision boundary separates one class from another.
  • It shows where the model switches its prediction.
  • Visualizing boundaries helps us understand how the model thinks β€” and whether it generalizes well.