Bias-Variance Trade-off Explorer
Heart Disease Risk Prediction - Bias-Variance Trade-off
See how regularization affects overfitting in medical diagnosis
Click to add patients. Solid circles = training data, dashed = test data
Low C (Smooth boundaries) ↔ High C (Complex boundaries)
Performance Metrics
Balanced regularization. Training and test accuracy are similar and reasonably high.
Training Set
Test Set
Performance Gap
Large gaps indicate overfitting
Key Insights
- Low C: Simple boundaries, may underfit
- High C: Complex boundaries, may overfit
- Optimal C: Best test performance
- Large train-test gap = overfitting
Interactive demonstration of the bias-variance trade-off using a medical diagnosis scenario. Adjust the regularization parameter C to see how decision boundaries change: low C produces smooth, potentially underfitting boundaries while high C creates complex boundaries that may overfit to training data. Click to add patients and watch how the model’s predictions and performance metrics respond.
The regularization parameter C controls the bias-variance trade-off directly. Low C penalizes complexity, yielding high-bias/low-variance models. High C allows the model to fit training data closely, risking high variance. The train-test accuracy gap reveals when you've crossed from learning signal to memorizing noise.
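This sweep can be sketched in a few lines. A minimal sketch assuming scikit-learn (whose `LogisticRegression` uses this same C convention); the synthetic "patients" and the degree-5 polynomial features are illustrative assumptions standing in for the demo's clickable points and complex boundaries:

```python
# Illustrative sketch: sweep C and watch the train-test accuracy gap.
# The synthetic data and degree-5 polynomial features are assumptions,
# not the demo's actual setup.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))  # two features per "patient"
y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.8, size=200) > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

scores = {}
for C in [0.01, 1.0, 100.0]:
    # Polynomial features give high-C models room to carve complex,
    # overfit-prone boundaries; low C shrinks them back toward smooth ones.
    clf = make_pipeline(PolynomialFeatures(degree=5),
                        LogisticRegression(C=C, max_iter=5000))
    clf.fit(X_tr, y_tr)
    scores[C] = (clf.score(X_tr, y_tr), clf.score(X_te, y_te))
    tr, te = scores[C]
    print(f"C={C:>6}: train={tr:.2f}  test={te:.2f}  gap={tr - te:+.2f}")
```

A widening gap as C grows is the signature of crossing from learning signal to memorizing noise.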
Precision, recall, and F1 scores tell different stories about model errors. In medical diagnosis, false negatives (missed disease) may be costlier than false positives. The gap between training and test metrics is itself a statistic: a measure of generalization failure.
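A small sketch of those metrics on a toy screening result (assuming scikit-learn; the labels and predictions are made up for illustration):

```python
# Toy screening result: 1 = disease present. Labels are invented.
from sklearn.metrics import precision_score, recall_score, f1_score, confusion_matrix

y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 0, 0, 0, 0, 0, 0, 1, 0]

# confusion_matrix returns [[tn, fp], [fn, tp]] for binary labels.
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(f"false negatives (missed disease): {fn}")            # 2
print(f"precision: {precision_score(y_true, y_pred):.2f}")  # 0.67
print(f"recall:    {recall_score(y_true, y_pred):.2f}")     # 0.50
print(f"f1:        {f1_score(y_true, y_pred):.2f}")         # 0.57
```

Here precision looks tolerable while recall shows that half the sick patients were missed, which is exactly the asymmetry that matters in a medical screen.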