Machine Learning

ML models and applications

Artifacts exploring machine learning techniques and applications.

Artifacts

2025

The regularization parameter C controls the bias-variance trade-off directly. A low C weights the regularization penalty heavily, yielding high-bias/low-variance models; a high C lets the model fit the training data closely, risking high variance. The train-test accuracy gap reveals when you've crossed from learning signal to memorizing noise.
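The trade-off can be sketched with scikit-learn's SVC on synthetic data (both the model and the dataset are assumptions; the artifact's exact setup is not shown here): as C grows, training accuracy rises and the train-test gap tends to widen.

```python
# Minimal sketch: watch training accuracy and the train-test gap as C
# varies. SVC and the synthetic dataset are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Mostly-noise features make overfitting possible at high C.
X, y = make_classification(n_samples=400, n_features=20, n_informative=5,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

train_acc, gap = {}, {}
for C in (0.01, 1.0, 100.0):
    clf = SVC(C=C, kernel="rbf").fit(X_tr, y_tr)
    train_acc[C] = clf.score(X_tr, y_tr)
    gap[C] = train_acc[C] - clf.score(X_te, y_te)
    print(f"C={C:>6}: train acc={train_acc[C]:.3f}  gap={gap[C]:.3f}")
```

At C=0.01 the model underfits (low training accuracy, small gap); at C=100 it fits the training set nearly perfectly, and any remaining error shows up as a widened gap.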

2025

Persistent homology extracts topological features from point cloud data at multiple scales. The filtration process shown here is foundational to topological data analysis (TDA), revealing structure that clustering algorithms miss.
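The H0 (connected-components) part of a Vietoris-Rips filtration can be sketched in plain NumPy with union-find: join points closer than a scale eps and count components as eps grows. A full persistence computation would also track loops (H1) and would normally use a library such as GUDHI or Ripser; the two-cluster point cloud below is an illustrative assumption.

```python
# Sketch of H0 in a Vietoris-Rips filtration: components merge as the
# scale parameter eps grows. Point cloud and scales are illustrative.
import numpy as np

rng = np.random.default_rng(0)
# Two well-separated clusters of 20 points each.
cloud = np.vstack([rng.normal(0.0, 0.3, (20, 2)),
                   rng.normal(5.0, 0.3, (20, 2))])

def h0_components(points, eps):
    """Number of connected components when points within eps are joined."""
    n = len(points)
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i
    d = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    for i in range(n):
        for j in range(i + 1, n):
            if d[i, j] <= eps:
                parent[find(i)] = find(j)
    return len({find(i) for i in range(n)})

# Small scale: many components; medium: the two clusters; large: one.
counts = [h0_components(cloud, eps) for eps in (0.05, 1.5, 10.0)]
print(counts)
```

The plateau at two components over a wide range of eps is the "persistent" feature: it reflects the two-cluster structure rather than sampling noise.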

2025

PCA reduces the four Iris measurements to three principal components. Here those components become wave frequencies - a literal sonification of dimensionality reduction. The interference patterns reveal geometric relationships invisible in the raw data.
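The reduction-to-frequencies idea can be sketched as follows, assuming scikit-learn's PCA; the carrier frequencies and the coordinate-to-pitch mapping are illustrative choices, not taken from the artifact.

```python
# Sketch of the sonification: reduce Iris to three principal components
# and let each sample's coordinates detune three carrier tones.
# Carriers (220/330/440 Hz) and the mapping are illustrative assumptions.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

X = load_iris().data                       # 150 samples x 4 measurements
scores = PCA(n_components=3).fit_transform(X)

base = np.array([220.0, 330.0, 440.0])     # one carrier per component
t = np.linspace(0.0, 0.05, 2205)           # 50 ms at 44.1 kHz

def tone(coords):
    """Each PC coordinate shifts its carrier's pitch; the sines are summed."""
    freqs = base * (1.0 + 0.1 * coords)
    return sum(np.sin(2.0 * np.pi * f * t) for f in freqs)

wave = tone(scores[0])                     # waveform for the first flower
```

Because the three sines have different frequencies, their sum produces the interference patterns the artifact describes, and samples with similar PC coordinates sound alike.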

2025

This artifact shows the decomposition explicitly: PC1 contributes the most variance (largest amplitude), while PC2 and PC3 add detail. The superposition is a weighted sum in which the eigenvalues determine each component's contribution.
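The eigenvalue weighting can be made explicit in plain NumPy: the eigenvalues of the covariance matrix are the component variances, so scaling each wave by the square root of its eigenvalue (its standard deviation) gives PC1 the largest amplitude. The carrier frequencies are again an illustrative assumption.

```python
# Sketch of the explicit weighted sum: amplitude of each wave is the
# square root of its eigenvalue, so PC1 dominates and PC2/PC3 add detail.
import numpy as np
from sklearn.datasets import load_iris

X = load_iris().data
Xc = X - X.mean(axis=0)                    # center before computing covariance
eigvals = np.linalg.eigh(np.cov(Xc, rowvar=False))[0][::-1]  # largest first

amps = np.sqrt(eigvals[:3])                # component standard deviations
freqs = [220.0, 330.0, 440.0]              # illustrative carriers
t = np.linspace(0.0, 0.05, 2205)
superposition = sum(a * np.sin(2.0 * np.pi * f * t)
                    for a, f in zip(amps, freqs))
```

Printing `amps` shows the steep drop-off after PC1, which is why the first component's wave dominates the superposition.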