From linear regression to deep neural networks — master the algorithms, concepts, and real-world applications that power modern AI systems.
Machine learning branches into distinct families based on how models learn from data.
Six essential ideas every ML practitioner needs to understand deeply.
The model learns a mapping from inputs to outputs using labelled training examples. Each example is an (input, correct output) pair. The model minimises the difference between its predictions and the true labels.
No labels — the model discovers hidden structure in raw data on its own. Clustering groups similar points together; dimensionality reduction finds compact representations. Used for anomaly detection, compression, and exploration.
An agent learns by interacting with an environment — taking actions, receiving rewards or penalties, and updating its policy. No labelled data; learning emerges from trial and error over thousands of episodes.
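That reward-driven loop can be illustrated with the simplest RL setting, a multi-armed bandit. This is a hypothetical sketch — the arm rewards, epsilon, and step count below are invented, not taken from any benchmark:

```python
import random

def run_bandit(rewards, steps=2000, eps=0.1, seed=0):
    """Epsilon-greedy agent: explore a random arm with probability eps,
    otherwise exploit the arm with the highest estimated value so far."""
    rng = random.Random(seed)
    q = [0.0] * len(rewards)  # running value estimate per arm
    n = [0] * len(rewards)    # pull count per arm
    for _ in range(steps):
        if rng.random() < eps:
            a = rng.randrange(len(rewards))                   # explore
        else:
            a = max(range(len(rewards)), key=lambda i: q[i])  # exploit
        r = rewards[a]             # deterministic reward, for simplicity
        n[a] += 1
        q[a] += (r - q[a]) / n[a]  # incremental mean update
    return q

q = run_bandit([0.2, 0.8, 0.5])  # estimates converge on the best arm (index 1)
```

Real RL adds states and delayed rewards (e.g. Q-learning), but the act-observe-update loop is the same trial-and-error core.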
Overfitting memorises training noise and fails on new data. Underfitting is too simple to capture the signal. The sweet spot — the bias–variance tradeoff — balances both errors for best generalisation.
Transforming raw data into meaningful inputs that improve model performance. Often the most impactful step — a good feature can outperform a complex algorithm on poor features.
Beyond accuracy: precision, recall, F1-score, and ROC-AUC reveal how well a model performs across classes. The confusion matrix shows exactly where predictions succeed or fail.
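Those metrics are simple ratios over confusion-matrix counts. A minimal sketch — the function name and the example counts are illustrative:

```python
def precision_recall_f1(tp, fp, fn):
    """Derive precision, recall, and F1 from confusion-matrix counts."""
    # Of everything predicted positive, how much was right?
    precision = tp / (tp + fp) if tp + fp else 0.0
    # Of everything actually positive, how much was found?
    recall = tp / (tp + fn) if tp + fn else 0.0
    # F1 is the harmonic mean of the two.
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Example: 8 true positives, 2 false positives, 4 false negatives.
p, r, f = precision_recall_f1(8, 2, 4)  # p = 0.8, r ≈ 0.667, f ≈ 0.727
```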
Eight foundational algorithms, each with visual intuition for how they learn and when to use them.
Fits a line (or hyperplane) through data by minimising squared residuals. Best for continuous outputs with linear relationships.
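In one dimension, minimising squared residuals has a closed-form solution. A small self-contained sketch (the helper name is ours, not from any library):

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b on 1-D data."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # Slope: covariance of x and y divided by the variance of x.
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    b = my - a * mx  # intercept so the line passes through the mean point
    return a, b

a, b = fit_line([0, 1, 2, 3], [1, 3, 5, 7])  # recovers y = 2x + 1 exactly
```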
Applies a sigmoid function to turn a linear score into a probability for binary classification; softmax generalises it to multi-class. Fast, interpretable, great baseline.
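A sketch of the idea: a 1-D logistic model trained by gradient descent on the log loss. The data, learning rate, and epoch count below are arbitrary:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(xs, ys, lr=0.5, epochs=500):
    """Stochastic gradient descent on the log loss for p = sigmoid(w*x + b)."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            p = sigmoid(w * x + b)
            # Gradient of the log loss: d/dw = (p - y) * x, d/db = (p - y)
            w -= lr * (p - y) * x
            b -= lr * (p - y)
    return w, b

# Negative inputs labelled 0, positive labelled 1.
w, b = train_logistic([-2, -1, 1, 2], [0, 0, 1, 1])
```

After training, `sigmoid(w * x + b)` is near 1 for positive inputs and near 0 for negative ones.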
Recursively splits data on the best feature threshold. Highly interpretable; prone to overfitting without pruning.
Trains many trees on random data/feature subsets and aggregates votes. Robust, handles outliers, built-in feature importance.
Partitions data into K clusters by iteratively assigning points to nearest centroid and recomputing centres. Choose K via elbow method.
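Lloyd's algorithm in miniature, on 1-D data for readability. This sketch seeds centroids with the first K points; real implementations use smarter initialisation such as k-means++:

```python
def kmeans_1d(points, k, iters=20):
    """Alternate between assigning each point to its nearest centroid
    and moving each centroid to the mean of its cluster."""
    centroids = points[:k]  # naive seeding: first k points
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda c: abs(p - centroids[c]))
            clusters[nearest].append(p)
        # Recompute centres; keep the old centroid if a cluster went empty.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

cents = kmeans_1d([1.0, 1.2, 0.8, 10.0, 10.4, 9.6], k=2)  # → [1.0, 10.0]
```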
Layers of weighted neurons learn hierarchical representations via backpropagation. Foundation of modern deep learning and AI.
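Backpropagation at its smallest: one gradient step on a single sigmoid neuron (the weights, input, and learning rate are arbitrary), showing the loss reduction that training repeats millions of times across many layers:

```python
import math

def forward(w, b, x):
    return 1.0 / (1.0 + math.exp(-(w * x + b)))  # sigmoid activation

def loss(w, b, x, y):
    return (forward(w, b, x) - y) ** 2  # squared error on one example

w, b, x, y, lr = 0.5, 0.0, 1.0, 1.0, 0.5
p = forward(w, b, x)
# Chain rule: dL/dz = 2*(p - y) * p*(1 - p); then dz/dw = x and dz/db = 1.
grad = 2 * (p - y) * p * (1 - p)
w2, b2 = w - lr * grad * x, b - lr * grad  # one backprop update
```

The updated weights give a strictly lower loss on this example — stacking layers just extends the same chain rule through more functions.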
Finds the maximum-margin hyperplane separating classes. Works well in high dimensions; kernel trick handles non-linear boundaries.
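The margin objective can be optimised by sub-gradient descent on the hinge loss. An illustrative linear SVM in 2-D — the hyperparameters and data are made up, and production code would use a library such as scikit-learn:

```python
def train_linear_svm(data, lam=0.01, lr=0.1, epochs=100):
    """Sub-gradient descent on hinge loss plus an L2 penalty.
    Labels must be +1 or -1."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), y in data:
            margin = y * (w[0] * x1 + w[1] * x2 + b)
            if margin < 1:  # inside the margin: push the point out
                w[0] += lr * (y * x1 - lam * w[0])
                w[1] += lr * (y * x2 - lam * w[1])
                b += lr * y
            else:           # correctly separated: only regularise
                w[0] -= lr * lam * w[0]
                w[1] -= lr * lam * w[1]
    return w, b

data = [((2, 2), 1), ((3, 3), 1), ((-2, -2), -1), ((-3, -3), -1)]
w, b = train_linear_svm(data)  # separating line for the two clusters
```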
Sequential ensemble that fits each tree to the residuals of previous trees. XGBoost/LightGBM variants win tabular data competitions.
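The residual-fitting idea in miniature: boosting depth-1 stumps on 1-D data. This is a toy sketch of the principle behind XGBoost/LightGBM, not their actual split-finding:

```python
def fit_stump(xs, rs):
    """Best single-threshold split minimising squared error on residuals."""
    best = None
    for t in sorted(set(xs)):
        left = [r for x, r in zip(xs, rs) if x <= t]
        right = [r for x, r in zip(xs, rs) if x > t]
        lm = sum(left) / len(left) if left else 0.0
        rm = sum(right) / len(right) if right else 0.0
        err = (sum((r - lm) ** 2 for r in left)
               + sum((r - rm) ** 2 for r in right))
        if best is None or err < best[0]:
            best = (err, t, lm, rm)
    _, t, lm, rm = best
    return lambda x: lm if x <= t else rm

def boost(xs, ys, rounds=50, lr=0.3):
    """Each new stump fits the residuals the ensemble hasn't explained yet."""
    stumps = []
    for _ in range(rounds):
        preds = [sum(lr * s(x) for s in stumps) for x in xs]
        residuals = [y - p for y, p in zip(ys, preds)]
        stumps.append(fit_stump(xs, residuals))
    return lambda x: sum(lr * s(x) for s in stumps)

model = boost([0, 1, 2, 3], [0.0, 0.0, 1.0, 1.0])
```

The learning rate shrinks each stump's contribution, so the residuals decay geometrically and the ensemble converges on the step function in the labels.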
Where machine learning creates measurable value across industries today.
AI-assisted diagnosis from medical imaging — detecting tumours, diabetic retinopathy, and anomalies with radiologist-level accuracy.
Real-time transaction scoring flags anomalous patterns in milliseconds, protecting billions in card spend without blocking legitimate purchases.
Collaborative filtering and neural embeddings personalise product suggestions, driving 35% of Amazon's revenue via "customers also bought".
Predictive maintenance analyses sensor telemetry to forecast equipment failures before they happen, cutting unplanned downtime by up to 50%.
Transformer models parse customer reviews, social posts, and support tickets to surface brand perception and escalation risks at scale.
Object detection and segmentation power autonomous vehicles, quality inspection, retail checkout, and security systems globally.
Demand forecasting for ride-sharing and logistics uses weather, events, and historical patterns to optimise fleet positioning in real time.
Models trained on engagement scores, tenure, and compensation data flag flight-risk employees months before resignation, enabling proactive retention.
Click the canvas to place data points, then train a model to see a live decision boundary appear.
Uses a k-nearest neighbours approach on a pixel grid — each pixel is coloured by the majority class among its 5 nearest training points. Place at least 2 points per class, then hit Train.
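The colouring rule is plain k-nearest neighbours. A sketch of the same majority vote — the coordinates and labels below are invented, and the demo's internals may differ:

```python
from collections import Counter

def knn_predict(train, query, k=5):
    """Classify `query` by majority vote among its k nearest
    training points (squared Euclidean distance)."""
    nearest = sorted(train, key=lambda item: (item[0][0] - query[0]) ** 2
                     + (item[0][1] - query[1]) ** 2)[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

train = [((0, 0), "A"), ((1, 0), "A"), ((0, 1), "A"),
         ((5, 5), "B"), ((6, 5), "B"), ((5, 6), "B")]
label = knn_predict(train, (0.5, 0.5))  # the three nearby "A" points win the vote
```

Running this for every pixel centre in a grid yields exactly the kind of decision boundary the canvas renders.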
Hand-picked courses, docs, and tools — quality over quantity.
Top-down, code-first approach. Build real models in week 1. Free, world-class. (Free · Beginner-friendly)
The foundational MOOC. Covers regression, neural nets, and best practices from the pioneer himself. (Coursera · Certificates)
Stunning visual explanations of neural networks, gradient descent, and linear algebra fundamentals. (YouTube · Visual)
Clear, friendly explanations of stats and ML algorithms — famous for making the complex feel obvious. (YouTube · Statistics)
Six stages from zero to production-ready ML engineer.
20 essential concepts, one click to copy. Hover for the definition.
💡 Click any card to copy the definition to your clipboard
A structured progression through machine learning — from predictive AI fundamentals and neural networks to cloud-scale infrastructure, MLOps, and advanced architecture.
Core concepts for every ML practitioner. Covers supervised, unsupervised, and semi-supervised learning; key functional designs (Computer Vision, NLP/NLU, Pattern Recognition); three core network types (FFNNs, CNNs, RNNs); and a repeatable process for building AI systems from requirements through deployment.
Deep dive into neural network components. Covers all major activation functions (Sigmoid, Tanh, ReLU, Leaky ReLU, Softmax, Softplus), the full neuron cell type taxonomy, and 30+ named architectures from the original Perceptron (1958) to LSTM, GAN, and Transformer models.
ML at cloud scale: GPU/TPU processing units, MLaaS, container-based AI deployment, feature stores, cloud-based training (supervised/unsupervised/federated), pre-built Predictive AI APIs, automated deployment and monitoring, and cloud AI governance frameworks.
Cloud-native design patterns for production AI systems: Serverless Data Pipeline, Distributed Feature Store, Continuous Data Validation, Hybrid Data Processing, Distributed Model Training, AI Model Drift Detection, Federated AI Learning, AI Workload Autoscaling, and Containerized Model Deployment.
A widely-used, repeatable 12-step process for building any predictive AI system — from problem definition through continuous refinement. This sequence reflects best practices across industry and research for end-to-end ML delivery.
Precise definitions for the vocabulary every ML practitioner needs — covering learning approaches, network types, training mechanics, and deployment concepts.
8 questions covering core ML concepts — learning approaches, neural network architectures, training mechanics, and deployment. Click an answer to reveal the explanation.
From Rosenblatt's 1958 Perceptron to the Transformer era — the founding papers, breakthrough moments, and key researchers who created modern machine learning.
Precise definitions grounded in the original papers — with author, year, and venue for each term. The vocabulary of ML as defined by the researchers who created it.
8 questions drawn from founding papers and landmark results. Click an answer to reveal the explanation and paper citation.