Bonus challenges. No prerequisites. Pure fun.
Discover why accuracy alone can be dangerously misleading, and learn to evaluate models with precision, recall, and F1 score.
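A minimal sketch of the core idea, using a made-up imbalanced label set (the 95/5 split and the always-negative "model" are illustrative assumptions):

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Hypothetical imbalanced labels: 95% negatives, 5% positives.
y_true = np.array([0] * 95 + [1] * 5)
# A "model" that lazily predicts the majority class every time.
y_pred = np.zeros(100, dtype=int)

acc = accuracy_score(y_true, y_pred)                 # looks great: 0.95
prec = precision_score(y_true, y_pred, zero_division=0)
rec = recall_score(y_true, y_pred, zero_division=0)  # misses every positive
f1 = f1_score(y_true, y_pred, zero_division=0)
print(acc, prec, rec, f1)
```

The accuracy of 0.95 hides a recall of 0.0, which is exactly the trap this challenge explores.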
Learn when and why to use different cross-validation strategies like KFold vs StratifiedKFold, and see how the choice impacts your evaluation.
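One way the difference can show up, sketched on toy labels that are sorted by class (the worst case for plain KFold; the data here is illustrative):

```python
import numpy as np
from sklearn.model_selection import KFold, StratifiedKFold

# Labels sorted by class -- pathological for unshuffled KFold.
y = np.array([0] * 10 + [1] * 10)
X = np.arange(20).reshape(-1, 1)

# Plain KFold puts all of one class into each test fold...
kf_counts = [np.bincount(y[test], minlength=2)
             for _, test in KFold(n_splits=2).split(X)]
# ...while StratifiedKFold keeps the class ratio in every fold.
skf_counts = [np.bincount(y[test], minlength=2)
              for _, test in StratifiedKFold(n_splits=2).split(X, y)]
print(kf_counts, skf_counts)
```

With plain KFold one fold ends up containing only class 0 and the other only class 1, so every evaluation round tests on a class the model never saw.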
Pit five classic classifiers against each other using cross-validation, then rank them to find the champion.
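A possible skeleton for the tournament, assuming iris as the arena and these five common scikit-learn classifiers (any five classics would do):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
models = {
    "logreg": LogisticRegression(max_iter=1000),
    "tree": DecisionTreeClassifier(random_state=0),
    "knn": KNeighborsClassifier(),
    "nb": GaussianNB(),
    "svm": SVC(),
}
# Mean 5-fold accuracy per model, then rank best-first.
scores = {name: cross_val_score(m, X, y, cv=5).mean() for name, m in models.items()}
ranking = sorted(scores, key=scores.get, reverse=True)
print(ranking)
```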
Generate a noisy sine wave, fit polynomial models of varying complexity, and visualize underfitting vs overfitting in action.
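A sketch of the setup (sample size, noise level, and the three degrees are illustrative choices): training error collapses as degree grows, which is how the high-degree model starts chasing noise.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0, 2 * np.pi, 40)).reshape(-1, 1)
y = np.sin(X).ravel() + rng.normal(scale=0.2, size=40)  # noisy sine wave

errors = {}
for degree in (1, 5, 15):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(X, y)
    # Training error only -- a held-out set would expose the overfit.
    errors[degree] = mean_squared_error(y, model.predict(X))
print(errors)
```

Degree 1 can't bend with the sine (underfitting); degree 15 bends with the noise too (overfitting). Plotting the fitted curves makes both failure modes obvious.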
Compare dropna, mean imputation, and median imputation to see which strategy preserves the most predictive power.
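A tiny sketch of why the three strategies diverge, using a made-up column with one gap and one outlier:

```python
import numpy as np
import pandas as pd

# Hypothetical numeric column: one missing value, one big outlier.
df = pd.DataFrame({"age": [20.0, 30.0, np.nan, 40.0, 1000.0]})

dropped = df.dropna()                                  # loses the row entirely
mean_filled = df["age"].fillna(df["age"].mean())       # outlier drags the fill up
median_filled = df["age"].fillna(df["age"].median())   # robust to the outlier
print(df["age"].mean(), df["age"].median())
```

The outlier pulls the mean fill to 272.5 while the median fill stays at a plausible 35.0, which is why the choice can change downstream predictive power.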
Transform string columns like Sex and Embarked into numeric features using one-hot encoding and label encoding.
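A minimal sketch of both encodings on a toy frame (the three rows are illustrative):

```python
import pandas as pd
from sklearn.preprocessing import LabelEncoder

df = pd.DataFrame({"Sex": ["male", "female", "female"],
                   "Embarked": ["S", "C", "Q"]})

# One-hot: one binary column per category value.
onehot = pd.get_dummies(df, columns=["Sex", "Embarked"])

# Label encoding: one integer per category -- compact, but it
# imposes an arbitrary ordering when used on input features.
sex_codes = LabelEncoder().fit_transform(df["Sex"])
print(list(onehot.columns), list(sex_codes))
```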
Apply StandardScaler, MinMaxScaler, and RobustScaler to housing data and compare how each affects KNN model performance.
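A sketch of how the three scalers treat an outlier, on a made-up single feature (real housing data behaves similarly whenever a few prices dwarf the rest):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler, MinMaxScaler, RobustScaler

# One feature with a large outlier, as housing data often has.
X = np.array([[1.0], [2.0], [3.0], [4.0], [100.0]])

std = StandardScaler().fit_transform(X)
mm = MinMaxScaler().fit_transform(X)
rob = RobustScaler().fit_transform(X)

# MinMax squashes the four inliers toward 0; RobustScaler
# (median/IQR) keeps them spread out and lets the outlier fly.
print(mm.ravel(), rob.ravel())
```

Since KNN measures raw distances, which scaler you pick directly reshapes the neighborhoods, and with it the model's accuracy.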
Race the clock to build the best iris classifier you can in under a minute.
Train a model and achieve over 90% accuracy in as few lines of code as possible. Elegance counts.
Push past 95% accuracy on handwritten digit recognition. Whatever model it takes.
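One model that can clear the bar, as a starting-point sketch (a default RBF SVC on scikit-learn's bundled digits; the split is an arbitrary choice):

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Default RBF kernel, no tuning -- already a strong baseline here.
acc = accuracy_score(y_test, SVC().fit(X_train, y_train).predict(X_test))
print(acc)
```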
Create three visualizations that reveal hidden stories in the Titanic passenger data.
Create a synthetic dataset from scratch with specific statistical properties: controlled means, correlations, categories, and missing values.
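A sketch of one way to control each property (column names, sizes, and the shared-factor trick for correlation are all illustrative assumptions):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
n = 200

# Controlled mean via the normal's location; controlled correlation
# by deriving one column from the other plus small noise.
age = rng.normal(50, 10, n)
income = age * 1.5 + rng.normal(0, 5, n)

df = pd.DataFrame({
    "age": age,
    "income": income,
    # Controlled category frequencies via explicit probabilities.
    "city": rng.choice(["A", "B", "C"], n, p=[0.5, 0.3, 0.2]),
})
# Punch roughly 10% missing values into one column.
df.loc[rng.random(n) < 0.1, "income"] = np.nan

print(df["age"].mean(), df["age"].corr(df["income"]))
```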
Five bugs hidden in a machine learning pipeline. Find them all, fix them all, and prove your debugging instincts.