
Bias-Variance Trade-off Explorer

2025

Heart Disease Risk Prediction - Bias-Variance Trade-off

See how regularization affects overfitting in medical diagnosis

Click to add patients. Solid circles = training data, dashed = test data

Regularization slider: High Regularization (smooth boundaries) to Low Regularization (complex boundaries)

Performance Metrics

Good Fit

Balanced regularization. Training and test accuracy are similar and reasonably high.

Training Set

Accuracy, F1 Score, Precision, Recall (updated live as patients are added)

Test Set

Accuracy, F1 Score, Precision, Recall

Performance Gap

Train - Test Accuracy: large gaps indicate overfitting

Key Insights

  • Low C: Simple boundaries, may underfit
  • High C: Complex boundaries, may overfit
  • Optimal C: Best test performance
  • Large train-test gap = overfitting

Interactive demonstration of the bias-variance trade-off using a medical diagnosis scenario. Adjust the regularization parameter C to see how decision boundaries change: low C produces smooth, potentially underfitting boundaries, while high C creates complex boundaries that may overfit to the training data. Click to add patients and watch how the model's predictions and performance metrics respond.


The regularization parameter C controls the bias-variance trade-off directly. Low C penalizes complexity, yielding high-bias/low-variance models. High C allows the model to fit training data closely, risking high variance. The train-test accuracy gap reveals when you've crossed from learning signal to memorizing noise.
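A minimal sketch of this effect, using a from-scratch L2-regularized logistic regression on synthetic "patient" data (the data, feature expansion, and hyperparameters here are illustrative assumptions, not the demo's actual code). The penalty strength is 1/C, so small C shrinks the weights toward zero and keeps the boundary smooth, while large C lets the polynomial terms fit the training noise:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-feature "patients" with a noisy, curved disease boundary.
n = 200
X = rng.normal(size=(n, 2))
y = ((X[:, 0] ** 2 + X[:, 1] > 0.3 + rng.normal(scale=0.8, size=n))).astype(int)

def poly_features(X):
    """Degree-3 polynomial expansion so the boundary CAN become complex."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1 * x2, x1**2, x2**2,
                            x1**3, x2**3, x1**2 * x2, x1 * x2**2])

def fit_logreg(X, y, C, lr=0.1, steps=2000):
    """Gradient descent on logistic loss + L2 penalty of strength 1/C."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-np.clip(X @ w, -30, 30)))
        grad = X.T @ (p - y) / len(y) + w / (C * len(y))  # penalty shrinks w
        w -= lr * grad
    return w

def accuracy(X, y, w):
    return float(np.mean((X @ w > 0) == y))

Xtr, ytr = poly_features(X[:150]), y[:150]
Xte, yte = poly_features(X[150:]), y[150:]

for C in [0.01, 1.0, 100.0]:
    w = fit_logreg(Xtr, ytr, C)
    gap = accuracy(Xtr, ytr, w) - accuracy(Xte, yte, w)
    print(f"C={C:>6}: train={accuracy(Xtr, ytr, w):.3f} "
          f"test={accuracy(Xte, yte, w):.3f} gap={gap:+.3f} "
          f"|w|={np.linalg.norm(w):.2f}")
```

Sweeping C this way reproduces the demo's pattern: the weight norm (and boundary complexity) grows with C, and the train-test gap is the quantity to watch rather than training accuracy alone.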

Precision, recall, and F1 scores tell different stories about model errors. In medical diagnosis, false negatives (missed disease) may be costlier than false positives. The gap between training and test metrics is itself a statistic: a measure of generalization failure.
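All three metrics follow directly from the confusion-matrix counts. A quick illustration with hypothetical counts (tp, fp, fn, tn are made-up numbers, not the demo's state):

```python
# Hypothetical confusion counts for a disease classifier on a test set.
tp, fp, fn, tn = 40, 10, 5, 45  # fn = missed disease, the costly error here

precision = tp / (tp + fp)                         # of flagged patients, how many are sick
recall    = tp / (tp + fn)                         # of sick patients, how many are caught
f1        = 2 * precision * recall / (precision + recall)
acc       = (tp + tn) / (tp + fp + fn + tn)

print(f"precision={precision:.3f} recall={recall:.3f} f1={f1:.3f} accuracy={acc:.3f}")
# → precision=0.800 recall=0.889 f1=0.842 accuracy=0.850
```

Note that accuracy (0.850) hides the asymmetry: the 5 false negatives and 10 false positives are weighted equally by accuracy, but only precision and recall separate them.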