Machine Learning
ML models and applications
Artifacts exploring machine learning techniques and applications.
Artifacts
The regularization parameter C controls the bias-variance trade-off: low C weights the complexity penalty heavily, yielding high-bias/low-variance models, while high C lets the model fit the training data closely, risking high variance. The train-test accuracy gap reveals when you've crossed from learning signal to memorizing noise.
Persistent homology extracts topological features from point cloud data at multiple scales. The filtration process shown here is foundational to topological data analysis (TDA), revealing structure that clustering algorithms miss.
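Full persistent homology needs a TDA library (e.g. GUDHI or Ripser), but the zero-dimensional part of a Vietoris-Rips filtration can be sketched with a union-find: every point is a component born at scale 0, and a component dies when a growing edge merges it into another. This is a simplified H0-only sketch on synthetic clusters, not the artifact's own filtration code.

```python
from itertools import combinations

import numpy as np

def h0_persistence(points):
    """Return (birth, death) pairs for connected components (H0).

    Edges of the Vietoris-Rips filtration are processed in order of
    length; each merge of two components kills one of them at that scale.
    """
    n = len(points)
    edges = sorted(
        (float(np.linalg.norm(points[i] - points[j])), i, j)
        for i, j in combinations(range(n), 2)
    )
    parent = list(range(n))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    pairs = []
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[rj] = ri
            pairs.append((0.0, d))  # one component dies at scale d
    pairs.append((0.0, float("inf")))  # the last component never dies
    return pairs

# Two well-separated clusters: short bars inside each cluster,
# one long finite bar for the merge between clusters
rng = np.random.default_rng(0)
pts = np.vstack([rng.normal(0.0, 0.1, (5, 2)),
                 rng.normal(5.0, 0.1, (5, 2))])
bars = h0_persistence(pts)
```

The long finite bar is the structure a clustering algorithm would report as "two clusters"; persistence additionally records at exactly which scale that structure appears and disappears.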
PCA reduces the four Iris measurements to three principal components. Here those components become wave frequencies: a literal sonification of dimensionality reduction. The interference patterns reveal geometric relationships invisible in the raw data.
Shows the decomposition explicitly: PC1 contributes the most variance (largest amplitude), while PC2 and PC3 add detail. The superposition is a weighted sum in which the eigenvalues determine each component's contribution.
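The decomposition above can be sketched with a plain numpy PCA: SVD of the centered data gives the principal axes, the squared singular values give the covariance eigenvalues, and each component drives one sine whose amplitude is its share of the variance. The 150x4 matrix is a synthetic stand-in for the Iris measurements, and the base frequencies are an arbitrary choice, not the artifact's actual mapping.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for the Iris data: 150 samples x 4 correlated features
X = rng.normal(size=(150, 4)) @ rng.normal(size=(4, 4))

# PCA via SVD of the centered data matrix
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
explained_var = S**2 / (len(X) - 1)  # eigenvalues of the covariance matrix
scores = Xc @ Vt[:3].T               # projection onto PC1, PC2, PC3

# Sonification: each PC drives one sine; its eigenvalue sets the amplitude,
# so PC1 dominates the superposition and PC2/PC3 add finer detail
t = np.linspace(0.0, 1.0, 1000)
base_freqs = (220.0, 330.0, 440.0)   # hypothetical frequency assignment
amps = explained_var[:3] / explained_var[:3].sum()
wave = sum(a * np.sin(2 * np.pi * f * t) for a, f in zip(amps, base_freqs))
```

Because `explained_var` is sorted in decreasing order, the weighted sum is dominated by the PC1 sine, matching the "largest amplitude" behavior described in the blurb.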