All Topics

Browse every topic available on QuantPrep. Use the search to find exactly what you're looking for.

QuantLab
Linear Algebra
Advanced Statistics
Probability for Quants
Stochastic Calculus
Machine Learning
The Two Views of a Vector
Vector Operations
The Dot Product, Norms, and Angles
Orthogonality
The Two Views of a Matrix
Matrix Operations
Matrix Multiplication
Special Matrices
Linear Combinations and Span
Linear Independence
Basis and Dimension
Vector Spaces and Subspaces
Framing the Problem: Ax=b
Gaussian Elimination
The Solutions to Ax=b
Reduced Row Echelon Form (RREF)
LU Decomposition
Column Space & Rank
The Null Space
Row Space & Left Null Space
The Fundamental Theorem of Linear Algebra
The Geometric Meaning of the Determinant
Calculation and Properties
Eigenvalues & Eigenvectors
The Characteristic Equation
Diagonalization (PDP⁻¹)
Applications of Eigen-analysis
The Spectral Theorem
The Cholesky Decomposition (LLᵀ)
The Inexact Problem: Why Ax=b Often Has No Solution
The Geometry of "Best Fit": Projections
The Algebraic Solution: The Normal Equations
The Problem with the Normal Equations
The Stable Solution: The Gram-Schmidt Process
The QR Decomposition
The Singular Value Decomposition (SVD)
Principal Component Analysis (PCA)
Advanced SVD Applications
The Covariance Matrix & Portfolio Risk
Portfolio Optimization & The Efficient Frontier
The Capital Asset Pricing Model (CAPM)
Arbitrage & The Fundamental Theorem of Asset Pricing
Markov Chains for State Transitions
Fixed Income (Bond) Mathematics
Hypothesis Testing Guide
An Introduction to Hypothesis Testing
T-Test
Z-Test
ANOVA
F-Test
Pearson Correlation
Chi-Squared Test
Mann-Whitney U Test
Kruskal-Wallis Test
Wilcoxon Signed-Rank Test
Spearman's Rank Correlation
Friedman Test
Kolmogorov-Smirnov (K-S) Test
Hypothesis Testing & P-Values
Descriptive Statistics Explorer
The Central Limit Theorem (CLT)
Confidence Intervals
Z-Table Calculator
The Normal Distribution
Monte Carlo Simulation
Time Series Decomposition
Autocorrelation (ACF & PACF)
Volatility & Standard Deviation (GARCH)
Efficient Frontier & Sharpe Ratio
Kalman Filters
Stochastic Calculus & Ito's Lemma
Lesson 1.0: Set Theory, Sample Spaces, and Events
Lesson 1.1: Axioms of Probability (Kolmogorov)
Lesson 1.2: Conditional Probability and Independence
Lesson 1.3: Law of Total Probability and Bayes' Theorem
Lesson 1.4: Probability Mass Functions (PMF) and Cumulative Distribution Functions (CDF)
Lesson 1.5: Expected Value E[X], Variance Var[X], and Standard Deviation
Lesson 1.6: Common Discrete Distributions (Binomial, Poisson, Geometric)
Lesson 1.7: The Master Tool: Moment Generating Functions (MGFs)
Lesson 1.8: The Continuous World: PDFs and Smooth CDFs
Lesson 1.9: The Calculus of Center and Spread
Lesson 1.10: The Continuous Toolbox
Lesson 1.11: Masterclass on Continuous MGFs
Lesson 1.12: Thinking in Multiple Dimensions: Joint Distributions
Lesson 1.13: Slicing the Probability Landscape
Lesson 1.14: Measuring Relationships: Covariance & Correlation
Lesson 1.15: The Ultimate Separation: Statistical Independence
Lesson 2.0: Properties of the Normal Distribution and the Z-Score
Lesson 2.1: Linear Combinations of Independent Normal Random Variables
Lesson 2.2: Multivariate Normal Distribution
Lesson 2.3: Marginal and Conditional Distributions of MVN
Lesson 2.4: Capstone 1: Applications in Portfolio Theory
Lesson 2.5: The χ² (Chi-Squared) Distribution
Lesson 2.6: The t-Distribution (Student's t)
Lesson 2.7: The F-Distribution (Fisher-Snedecor)
Lesson 2.8: The Law of the Average: The WLLN
Lesson 2.9: Convergence in Distribution and the Central Limit Theorem (CLT)
Lesson 2.10: Capstone 2: The CLT in Action (Python Simulation)
Lesson 2.11: Slutsky's Theorem and the Delta Method
Lesson 3.0: Definition of a Statistic and an Estimator
Lesson 3.1: Unbiasedness, Bias, and Asymptotic Unbiasedness
Lesson 3.2: Efficiency and the Cramér-Rao Lower Bound (CRLB)
Lesson 3.3: Consistency and Sufficiency
Lesson 3.4: Method of Moments (MoM) Estimation
Lesson 3.5: Maximum Likelihood Estimation (MLE)
Lesson 3.6: Finding MLE Estimates via Optimization
Lesson 3.7: General Construction of Confidence Intervals (CIs)
Lesson 3.8: Deriving CIs for Mean and Variance
Lesson 3.9: Null vs. Alternative Hypotheses, Type I and II Errors
Lesson 3.10: Testing with p-values and Critical Regions
Lesson 3.11: Neyman-Pearson Lemma for Optimal Tests
Lesson 3.12: Likelihood Ratio Tests (LRT) and Wilks' Theorem
Simple Linear Regression (SLR)
Derivation of the OLS Estimators (for SLR)
Properties of the Fitted Model (R-Squared, Residuals)
Multiple Linear Regression (MLR) in Matrix Form
Derivation of the MLR OLS Estimator (in Matrix Form)
The Classical Linear Model Assumptions (The Bedrock)
The Gauss-Markov Theorem and the BLUE Property
t-Tests for Individual Coefficients
F-tests for Joint Hypotheses and Overall Model Significance
Multicollinearity and Variance Inflation Factor (VIF)
Heteroskedasticity: Detection and Robust Standard Errors
Autocorrelation: Detection and the Durbin-Watson Test
Capstone: Building and Testing a Fama-French Factor Model
Lesson 5.0: Introduction to Time Series Analysis (Trends, Seasonality, Cycles)
Lesson 5.1: The Bedrock of Time Series: Stationarity (Strict vs. Weak)
Identifying Lags with ACF and PACF Plots
The Autoregressive (AR) Model Explained
The Moving Average (MA) Model Explained
Building Forecasting Models with ARMA
Handling Non-Stationary Data with ARIMA Models
A Practical Guide to Model Selection: Box-Jenkins Methodology
Modeling Volatility with ARCH Models
Advanced Volatility Forecasting with GARCH Models
Capstone: Building a GARCH Model to Forecast Stock Market Volatility
Lesson 6.0: The Foundation: Random Walks and the Efficient Market Hypothesis
Lesson 6.1: A Deeper Look: Martingales and Predictability
Lesson 6.2: The Language of Options Pricing: Geometric Brownian Motion (GBM)
Lesson 6.3: The Brute-Force Solution: Monte Carlo Simulation for Pricing and Risk
Lesson 6.4: Resampling Reality: Bootstrapping for Estimating Standard Errors
Lesson 6.5: A Deeper Dive into Resampling: The Jackknife
Lesson 6.6: From One Series to Many: Vector Autoregression (VAR) Models
Lesson 6.7: Finding Long-Run Relationships: Cointegration
Lesson 6.8: Putting It Together: The Vector Error Correction Model (VECM)
Lesson 6.9: Capstone: Building a Pairs Trading Strategy with Cointegration
Bayes' Theorem
Bernoulli Distribution
Binomial Distribution
Poisson Distribution
Geometric Distribution
Hypergeometric Distribution
Negative Binomial Distribution
Discrete Uniform Distribution
Multinomial Distribution
Gamma Distribution
Beta Distribution
Exponential Distribution
Cauchy Distribution
Laplace Distribution
Weibull Distribution
Logistic Distribution
The Basics: Sample Spaces & Events
Combinatorics: The Art of Counting
Conditional Probability & Independence
Bayes' Theorem
Random Variables (Discrete & Continuous)
Expectation, Variance & Moments
Common Discrete Distributions
Joint, Marginal & Conditional Distributions
Covariance & Correlation
The Law of Large Numbers (LLN)
The Central Limit Theorem (CLT)
Transformations of Random Variables
Introduction to Information Theory
Introduction to Stochastic Processes & Stationarity
Discrete-Time Markov Chains
The Poisson Process
Random Walks & Brownian Motion
Sigma-Algebras & Probability Measures
The Lebesgue Integral & Rigorous Expectation
Martingales
Introduction to Itô Calculus
The Language of ML: Data, Features & Labels
The Three Flavors of Learning (Supervised, Unsupervised, Reinforcement)
Your First Models: An Intuitive Look (KNN & Simple Linear Regression)
The Golden Rule: How to Split Your Data (Train, Validate, Test)
How Do We Score a Model? (Accuracy, Confusion Matrix, MSE)
Mental Math for Interviews
The Financial ML Landscape (Alpha, Risk, Execution)
Feature Engineering for Financial Data (Price, Volume, Order Books)
Core Predictive Models (Trees, Boosting, Regularization)
Backtesting & Model Validation (Walk-Forward, Sharpe Ratio)
Classical Time-Series Models (ARIMA, GARCH)
Deep Learning for Sequences (LSTMs, Transformers)
Stationarity & Memory in Markets (ADF Test, Frac. Diff.)
Building Trading Signals from Predictions (Meta-Labeling)
Credit Default Prediction & Scoring
Anomaly & Financial Fraud Detection (Isolation Forests, Autoencoders)
Modeling Volatility & Value-at-Risk (VaR)
Model Explainability & Interpretability (SHAP, LIME)
Financial Sentiment Analysis (News, Earnings Reports, Tweets)
Information Extraction (NER, Topic Modeling)
Advanced Text Representation (Word2Vec, Transformers - BERT)
Integrating NLP Signals into Trading Models
Reinforcement Learning for Optimal Trading
Portfolio Optimization with ML (Covariance Estimation)
Leveraging Alternative Data (Satellite Imagery, Web Data)
AI Ethics & Regulation in Finance
Key Concepts - Mean and Variance
Standard Deviation (The "Intuitive" Spread)
The Normal Distribution N(μ, σ²)
Calculus Review - Derivatives (The "Slope")
Calculus Review - Integrals (The "Sum")
The Rules of "Normal" Infinitesimals
Why Normal Calculus Fails
Brownian Motion (Wiener Process Wt)
The "Weird" Scaling Property
A Model for Stocks (Geometric Brownian Motion)
The Failure of Path Length
Quadratic Variation (The "Aha!" Moment)
The New Rules of Algebra
The Itô Integral
Prerequisite - The Taylor Expansion
Itô's Lemma (Simple Case, for f(Wt))
Prerequisite - The 2-Variable Taylor Expansion
Itô's Lemma (The Full Version, for f(t, Xt))
Understanding the Full Formula (Translating Math to Finance)
Lesson 4.1: The "Magic Portfolio"
Lesson 4.2: Eliminating Risk
Lesson 4.3: Eliminating Drift μ
Lesson 4.4: The "No-Free-Lunch" Argument & The Final PDE
The Solution (The Black-Scholes Formula)
Delta (Δ): The "Speed" of an Option
Gamma (Γ): The "Acceleration"
Vega (ν): The "Jiggle Risk"
Theta (Θ): The "Melting Ice Cube"
Rho (ρ): The "Interest Rate Risk"
The "Other Way" - Risk-Neutral Valuation
The "Computer Way" - Monte Carlo Methods
Fixing σ - Stochastic Volatility (Heston Model)
Fixing "No Jumps" - Jump-Diffusion (Merton Model)
Fixing r - Stochastic Interest Rates (Vasicek & CIR)
QuantLab

Interactive tools for hands-on probability and statistics analysis.

Linear Algebra

Vectors, matrices, and eigenvalues. The language of data.

Advanced Statistics

The science of collecting, analyzing, and interpreting data.

Probability for Quants

Master random variables, distributions, and stochastic processes.

Stochastic Calculus

Derive Black-Scholes from scratch and master the models that power modern finance.

Machine Learning

Building predictive models for financial markets.

The Two Views of a Vector

Vectors as geometric arrows vs. vectors as ordered lists of numbers (the data science view).

Vector Operations

Addition, subtraction (tip-to-tail rule), and scalar multiplication (stretching/shrinking).

The Dot Product, Norms, and Angles

The dot product as a measure of 'projection' or 'agreement.' L1 and L2 norms as measures of length/magnitude. Cosine similarity as a practical application.

Orthogonality

The concept of perpendicular vectors (dot product = 0) and its meaning: independence.

The Two Views of a Matrix

A matrix as a container for data (a collection of vectors) vs. a matrix as a linear transformation that moves, rotates, and scales space.

Matrix Operations

Addition, scalar multiplication, and the transpose.

Matrix Multiplication

Taught not just as a rule, but as the composition of linear transformations, which explains why AB ≠ BA in general.

Special Matrices

Identity matrix (the 'do nothing' operation), inverse matrix (the 'undo' operation), diagonal, triangular, and symmetric matrices.

Linear Combinations and Span

What can you build with a set of vectors?

Linear Independence

Identifying and removing redundant vectors.

Basis and Dimension

The minimal set of vectors needed to define a space and the concept of its dimension.

Vector Spaces and Subspaces

Formalizing these concepts. A subspace as a 'plane' or 'line' within a higher-dimensional space that passes through the origin.

Framing the Problem: Ax=b

Understanding Ax=b from the row picture (intersection of planes) and the column picture (linear combination of columns).

Gaussian Elimination

The core algorithm for solving linear systems. Row operations, row echelon form (REF).

The Solutions to Ax=b

Identifying if a system has a unique solution, no solution, or infinitely many solutions from its REF.

Reduced Row Echelon Form (RREF)

The ultimate, unique 'answer sheet' for a linear system, removing the need for back-substitution.

LU Decomposition

The 'matrix version' of Gaussian Elimination. Solving Ax=b becomes a fast, two-step process of forward and back substitution.

Column Space & Rank

The space of all possible outputs of A. The concept of rank as the "true dimension" of the output space.

The Null Space

The space of all inputs that map to the zero vector. Its connection to multicollinearity in data.

Row Space & Left Null Space

Completing the picture of the four fundamental subspaces.

The Fundamental Theorem of Linear Algebra

How the four subspaces relate to each other and partition the input and output spaces.

The Geometric Meaning of the Determinant

The determinant as the scaling factor of area/volume.

Calculation and Properties

Cofactor expansion and the properties of determinants. A determinant of zero means the matrix squishes space into a lower dimension (i.e., it's not invertible).

Eigenvalues & Eigenvectors

Finding the 'special' vectors that are only scaled by a transformation, not rotated off their span (Ax = λx).

The Characteristic Equation

The calculation behind eigenvalues: solving det(A - λI) = 0.

Diagonalization (PDP⁻¹)

Decomposing a matrix into its core components: 'changing to the eigenbasis, scaling, and changing back.'

Applications of Eigen-analysis

Using eigenvalues for tasks like calculating matrix powers (e.g., for Markov chains).

The Spectral Theorem

For symmetric matrices (like covariance matrices), the eigendecomposition is especially beautiful and stable (A = QDQᵀ). This is the theoretical foundation of PCA.
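
As a quick illustration (not part of the lesson itself), here is a minimal NumPy sketch of the spectral decomposition A = QDQᵀ; the symmetric 3×3 matrix is made up for the example.

```python
import numpy as np

# A small symmetric, covariance-like matrix -- illustrative values only.
A = np.array([[0.04, 0.01, 0.00],
              [0.01, 0.09, 0.02],
              [0.00, 0.02, 0.16]])

# eigh is specialized for symmetric matrices: real eigenvalues, orthonormal eigenvectors.
eigvals, Q = np.linalg.eigh(A)
D = np.diag(eigvals)

print(np.allclose(A, Q @ D @ Q.T))       # True: A = Q D Q^T up to rounding
print(np.allclose(Q.T @ Q, np.eye(3)))   # True: Q is orthogonal
```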

The Cholesky Decomposition (LLᵀ)

A highly efficient specialization for symmetric, positive-definite matrices, often used in optimization and financial modeling.
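
A hedged sketch of one common financial-modeling use: taking the Cholesky factor of a hypothetical covariance matrix to generate correlated normal returns.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-asset covariance matrix (symmetric, positive definite).
Sigma = np.array([[0.040, 0.018],
                  [0.018, 0.090]])

L = np.linalg.cholesky(Sigma)            # Sigma = L @ L.T, with L lower-triangular
z = rng.standard_normal((2, 100_000))    # independent standard normal draws
returns = L @ z                          # correlated draws with covariance ~ Sigma

print(np.cov(returns))                   # close to Sigma for a large sample
```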

The Inexact Problem: Why Ax=b Often Has No Solution

Introducing the goal of minimizing the error ||Ax - b||.

The Geometry of "Best Fit": Projections

Finding the closest point in a subspace (the Column Space) to an external vector.

The Algebraic Solution: The Normal Equations

Deriving AᵀAx̂ = Aᵀb from the projection geometry. This is the engine of Linear Regression.
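
A minimal sketch of the normal equations in NumPy, using a made-up design matrix A and target b; in practice a library least-squares routine is preferred for stability (see the QR lesson below).

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((100, 3))                  # hypothetical design matrix
x_true = np.array([2.0, -1.0, 0.5])
b = A @ x_true + 0.1 * rng.standard_normal(100)    # noisy observations

# Solve A^T A x_hat = A^T b directly.
x_hat = np.linalg.solve(A.T @ A, A.T @ b)

# Same answer, computed more stably by NumPy's least-squares routine.
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)
print(x_hat, x_lstsq)
```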

The Problem with the Normal Equations

Understanding why AᵀA can be ill-conditioned (its condition number is the square of A's) and lead to numerical errors.

The Stable Solution: The Gram-Schmidt Process

An algorithm for creating a "nice" orthonormal basis from any starting basis.

The QR Decomposition

Using Gram-Schmidt to factor A = QR, which makes solving the least squares problem trivial (back-substitute Rx̂ = Qᵀb) and numerically robust.
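
A sketch of the QR route to the same least-squares problem, with the same kind of synthetic A and b as in the normal-equations example above.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((100, 3))
b = A @ np.array([2.0, -1.0, 0.5]) + 0.1 * rng.standard_normal(100)

Q, R = np.linalg.qr(A)                # reduced QR: A = Q R, Q has orthonormal columns
x_hat = np.linalg.solve(R, Q.T @ b)   # solve the triangular system R x = Q^T b

print(x_hat)
```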

The Singular Value Decomposition (SVD)

The ultimate decomposition (A = UΣVᵀ) that works for any matrix and finds orthonormal bases for all four fundamental subspaces simultaneously.

Principal Component Analysis (PCA)

A direct, powerful application of SVD on the data matrix for dimensionality reduction.
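
An illustrative sketch (synthetic data) of PCA done directly from the SVD of a centered data matrix.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((500, 4)) @ rng.standard_normal((4, 4))  # hypothetical data: 500 obs x 4 features

Xc = X - X.mean(axis=0)                        # center each column
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

explained_var = S**2 / (len(X) - 1)            # eigenvalues of the sample covariance matrix
components = Vt                                # principal directions (one per row)
scores = Xc @ Vt.T                             # data projected onto the principal components

print(explained_var / explained_var.sum())     # proportion of variance explained
```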

Advanced SVD Applications

Low-rank approximation for noise reduction, and the core ideas behind recommendation systems.

The Covariance Matrix & Portfolio Risk

Deriving portfolio variance from first principles using linear algebra.
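
A minimal worked example of portfolio variance σ² = wᵀΣw, with hypothetical weights and covariance matrix.

```python
import numpy as np

w = np.array([0.5, 0.3, 0.2])                 # hypothetical portfolio weights (sum to 1)
Sigma = np.array([[0.040, 0.012, 0.006],      # hypothetical annualized covariance matrix
                  [0.012, 0.090, 0.015],
                  [0.006, 0.015, 0.160]])

port_var = w @ Sigma @ w                      # w^T Sigma w
port_vol = np.sqrt(port_var)
print(f"variance = {port_var:.4f}, volatility = {port_vol:.2%}")
```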

Portfolio Optimization & The Efficient Frontier

Using linear algebra to construct optimal portfolios.

The Capital Asset Pricing Model (CAPM)

Understanding the relationship between risk and expected return.

Arbitrage & The Fundamental Theorem of Asset Pricing

Connecting "no free lunch" to the geometry of vector spaces.

Markov Chains for State Transitions

Modeling dynamic systems like credit ratings with transition matrices.
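
A small sketch using a hypothetical credit-rating transition matrix: raising it to a power gives multi-period transition probabilities.

```python
import numpy as np

# Hypothetical one-year transition matrix over states [A, B, Default]; rows sum to 1.
P = np.array([[0.90, 0.08, 0.02],
              [0.10, 0.80, 0.10],
              [0.00, 0.00, 1.00]])   # default is absorbing

P5 = np.linalg.matrix_power(P, 5)    # five-year transition probabilities
print(P5[0])                         # distribution after 5 years, starting from rating A
```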

Fixed Income (Bond) Mathematics

Duration and convexity as linear algebraic concepts.

Hypothesis Testing Guide

A comprehensive guide to choosing the right statistical test.

An Introduction to Hypothesis Testing

A practical guide to deciding if your results are a real breakthrough or just random noise.

T-Test

Compares the means of two groups, assuming normal distribution.

Z-Test

Compares means of large samples (n>30) with known population variance.

ANOVA

Compares the averages of three or more groups.

F-Test

Compares the variances (spread) of two or more groups.

Pearson Correlation

Measures the linear relationship between two continuous variables.

Chi-Squared Test

Analyzes categorical data to find significant relationships.

Mann-Whitney U Test

Alternative to the T-Test when data is not normally distributed.

Kruskal-Wallis Test

Alternative to ANOVA for comparing three or more groups.

Wilcoxon Signed-Rank Test

Alternative to the paired T-Test for repeated measurements.

Spearman's Rank Correlation

Measures the monotonic relationship between two ranked variables.

Friedman Test

The non-parametric alternative to a repeated-measures ANOVA.

Kolmogorov-Smirnov (K-S) Test

Tests if a sample is drawn from a specific distribution.

Hypothesis Testing & P-Values

The detective work of data science.

Descriptive Statistics Explorer

Interactive guide to mean, median, skewness, and kurtosis.

The Central Limit Theorem (CLT)

Discover how order emerges from chaos.

Confidence Intervals

Understanding the range where a true value likely lies.

Z-Table Calculator

Calculate probabilities from Z-scores and vice-versa.
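
The same lookups can be sketched in code with SciPy's standard normal distribution.

```python
from scipy.stats import norm

print(norm.cdf(1.96))    # P(Z <= 1.96) ~ 0.975
print(norm.sf(1.96))     # P(Z >  1.96) ~ 0.025 (right tail)
print(norm.ppf(0.975))   # z-score with 97.5% of probability to its left ~ 1.96
```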

The Normal Distribution

The ubiquitous "bell curve."

Monte Carlo Simulation

Using random simulation to solve complex problems.

Time Series Decomposition

Breaking down a time series into its core components.

Autocorrelation (ACF & PACF)

Measuring how a time series correlates with its past values.

Volatility & Standard Deviation (GARCH)

Modeling the changing volatility of financial returns.

Efficient Frontier & Sharpe Ratio

Finding the optimal portfolio for a given level of risk.

Kalman Filters

Dynamically estimating the state of a system from noisy data.

Stochastic Calculus & Ito's Lemma

The calculus of random walks, essential for derivatives pricing.

Lesson 1.0: Set Theory, Sample Spaces, and Events

Understanding the building blocks of probability.

Lesson 1.1: Axioms of Probability (Kolmogorov)

The three fundamental rules that govern all of probability.

Lesson 1.2: Conditional Probability and Independence

How the occurrence of one event affects another.

Lesson 1.3: Law of Total Probability and Bayes' Theorem

Updating your beliefs in the face of new evidence.

Lesson 1.4: Probability Mass Functions (PMF) and Cumulative Distribution Functions (CDF)

Describing the probabilities of discrete outcomes.

Lesson 1.5: Expected Value E[X], Variance Var[X], and Standard Deviation

Calculating the center and spread of a random variable.

Lesson 1.6: Common Discrete Distributions (Binomial, Poisson, Geometric)

Exploring key models for random events.

Lesson 1.7: The Master Tool: Moment Generating Functions (MGFs)

The "fingerprint" of a distribution for deriving moments.

Lesson 1.8: The Continuous World: PDFs and Smooth CDFs

Describing the probabilities of continuous outcomes.

Lesson 1.9: The Calculus of Center and Spread

Calculating moments for continuous random variables.

Lesson 1.10: The Continuous Toolbox

Exploring Uniform, Exponential, and Gamma distributions.

Lesson 1.11: Masterclass on Continuous MGFs

Deriving moments for Normal, Exponential, and Gamma distributions.

Lesson 1.12: Thinking in Multiple Dimensions: Joint Distributions

Modeling multiple random variables simultaneously.

Lesson 1.13: Slicing the Probability Landscape

Extracting marginal and conditional probabilities from joint distributions.

Lesson 1.14: Measuring Relationships: Covariance & Correlation

Measuring how two random variables move together.

Lesson 1.15: The Ultimate Separation: Statistical Independence

Defining when two variables have no influence on each other.

Lesson 2.0: Properties of the Normal Distribution and the Z-Score

Mastering the bell curve and standardization.

Lesson 2.1: Linear Combinations of Independent Normal Random Variables

Understanding how normal variables combine.

Lesson 2.2: Multivariate Normal Distribution

The cornerstone of modern portfolio theory.

Lesson 2.3: Marginal and Conditional Distributions of MVN

Dissecting multi-asset models.

Lesson 2.4: Capstone 1: Applications in Portfolio Theory

Applying MVN properties to portfolio construction and risk management.

Lesson 2.5: The χ² (Chi-Squared) Distribution

The distribution of variances.

Lesson 2.6: The t-Distribution (Student's t)

The backbone of hypothesis testing with small sample sizes.

Lesson 2.7: The F-Distribution (Fisher-Snedecor)

Comparing variances between groups and the foundation of ANOVA.

Lesson 2.8: The Law of the Average: The WLLN

Why casino averages are so stable.

Lesson 2.9: Convergence in Distribution and the Central Limit Theorem (CLT)

Why the normal distribution is everywhere.

Lesson 2.10: Capstone 2: The CLT in Action (Python Simulation)

An interactive simulation to visualize the CLT with different distributions.
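
A minimal, non-interactive version of that simulation: standardized sample means of a skewed exponential distribution look increasingly normal as n grows.

```python
import numpy as np

rng = np.random.default_rng(0)

for n in (1, 5, 30, 200):
    # 10,000 sample means, each computed from n exponential(1) draws (mean 1, variance 1).
    means = rng.exponential(scale=1.0, size=(10_000, n)).mean(axis=1)
    z = (means - 1.0) / (1.0 / np.sqrt(n))        # standardized; CLT says ~ N(0, 1) for large n
    left_tail = np.mean(z < -1.96)                # approaches 0.025 as n grows
    print(f"n={n:4d}  left tail={left_tail:.3f}  third moment={np.mean(z**3):+.2f}")
```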

Lesson 2.11: Slutsky's Theorem and the Delta Method

Tools for approximating the distribution of functions of random variables.

Lesson 3.0: Definition of a Statistic and an Estimator

Distinguishing between a function of data and a guess for a parameter.

Lesson 3.1: Unbiasedness, Bias, and Asymptotic Unbiasedness

Evaluating the accuracy of estimators.

Lesson 3.2: Efficiency and the Cramér-Rao Lower Bound (CRLB)

Finding the "best" possible unbiased estimator.

Lesson 3.3: Consistency and Sufficiency

Ensuring estimators converge to the true value and use all available information.

Lesson 3.4: Method of Moments (MoM) Estimation

An intuitive technique for finding estimators by matching sample moments to population moments.

Lesson 3.5: Maximum Likelihood Estimation (MLE)

The most important method for parameter estimation in finance.

Lesson 3.6: Finding MLE Estimates via Optimization

The practical side of implementing MLE.

Lesson 3.7: General Construction of Confidence Intervals (CIs)

A framework for creating intervals for any parameter.

Lesson 3.8: Deriving CIs for Mean and Variance

Using t, χ², and Z pivotal quantities to build intervals.

Lesson 3.9: Null vs. Alternative Hypotheses, Type I and II Errors

The fundamental setup of all hypothesis tests.

Lesson 3.10: Testing with p-values and Critical Regions

The two equivalent approaches to making a statistical decision.

Lesson 3.11: Neyman-Pearson Lemma for Optimal Tests

Finding the most powerful test for a given significance level.

Lesson 3.12: Likelihood Ratio Tests (LRT) and Wilks' Theorem

A general method for comparing nested models.

Simple Linear Regression (SLR)

Modeling a relationship with a single predictor.

Derivation of the OLS Estimators (for SLR)

The calculus behind finding the "best fit" line.

Properties of the Fitted Model (R-Squared, Residuals)

Assessing how well your linear model fits the data.

Multiple Linear Regression (MLR) in Matrix Form

Extending SLR to multiple predictors using linear algebra.

Derivation of the MLR OLS Estimator (in Matrix Form)

The matrix algebra for solving a multiple regression problem.

The Classical Linear Model Assumptions (The Bedrock)

Defining the rules for OLS to be BLUE.

The Gauss-Markov Theorem and the BLUE Property

The theoretical justification for using OLS.

t-Tests for Individual Coefficients

Testing the significance of a single predictor.

F-tests for Joint Hypotheses and Overall Model Significance

Testing the significance of a group of predictors or the entire model.

Multicollinearity and Variance Inflation Factor (VIF)

Diagnosing when predictors are too correlated with each other.

Heteroskedasticity: Detection and Robust Standard Errors

Handling non-constant variance in the error terms.

Autocorrelation: Detection and the Durbin-Watson Test

Detecting patterns in the error terms over time.

Capstone: Building and Testing a Fama-French Factor Model

A practical application of MLR to test a famous financial model.

Lesson 5.0: Introduction to Time Series Analysis (Trends, Seasonality, Cycles)

Decomposing the components of a time series (Trend, Seasonality, Cycles, and Noise).

Lesson 5.1: The Bedrock of Time Series: Stationarity (Strict vs. Weak)

The most important property for modeling time series data.

Identifying Lags with ACF and PACF Plots

The key tools for identifying the structure of a time series.

The Autoregressive (AR) Model Explained

Modeling how past values influence the present.

The Moving Average (MA) Model Explained

Modeling how past forecast errors influence the present.

Building Forecasting Models with ARMA

Combining AR and MA models to capture complex dynamics.

Handling Non-Stationary Data with ARIMA Models

Incorporating differencing to model real-world data like stock prices.

A Practical Guide to Model Selection: Box-Jenkins Methodology

The systematic process for identifying, estimating, and validating ARIMA models.

Modeling Volatility with ARCH Models

Introducing models where variance depends on past errors.

Advanced Volatility Forecasting with GARCH Models

The industry-standard model for volatility forecasting.

Capstone: Building a GARCH Model to Forecast Stock Market Volatility

A real-world project to model and forecast the volatility of a major stock index.
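
A hedged sketch of the core recursion only: simulating a GARCH(1,1) process with made-up parameters in plain NumPy. A real capstone would instead estimate the parameters from index returns with a dedicated package.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical GARCH(1,1) parameters: sigma_t^2 = omega + alpha * r_{t-1}^2 + beta * sigma_{t-1}^2
omega, alpha, beta = 1e-6, 0.08, 0.90
n_days = 2000

r = np.zeros(n_days)
sigma2 = np.full(n_days, omega / (1 - alpha - beta))   # start at the unconditional variance

for t in range(1, n_days):
    sigma2[t] = omega + alpha * r[t - 1]**2 + beta * sigma2[t - 1]
    r[t] = np.sqrt(sigma2[t]) * rng.standard_normal()

print(np.sqrt(sigma2[-1]))                             # latest conditional (daily) volatility
print(r.std(), np.sqrt(omega / (1 - alpha - beta)))    # sample vs. unconditional volatility
```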

Lesson 6.0: The Foundation: Random Walks and the Efficient Market Hypothesis

Exploring the theory that market prices are unpredictable.

Lesson 6.1: A Deeper Look: Martingales and Predictability

The formal mathematical definition of a "fair game" and its implications for financial markets.

Lesson 6.2: The Language of Options Pricing: Geometric Brownian Motion (GBM)

The standard model for stock price paths used in the Black-Scholes formula.

Lesson 6.3: The Brute-Force Solution: Monte Carlo Simulation for Pricing and Risk

Using simulation to solve problems that are too hard for pure math.

Lesson 6.4: Resampling Reality: Bootstrapping for Estimating Standard Errors

A powerful computational method for assessing the uncertainty of an estimate when theory fails.

Lesson 6.5: A Deeper Dive into Resampling: The Jackknife

A related resampling technique for bias and variance estimation.

Lesson 6.6: From One Series to Many: Vector Autoregression (VAR) Models

Modeling the dynamics of multiple time series at once.

Lesson 6.7: Finding Long-Run Relationships: Cointegration

A statistical test for finding stable, long-term relationships between non-stationary time series (the basis of pairs trading).
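
A hedged sketch of an Engle-Granger style check on two synthetic series; the statsmodels `coint` function is assumed to be available, and the data here are simulated rather than real prices.

```python
import numpy as np
from statsmodels.tsa.stattools import coint

rng = np.random.default_rng(0)

# A random-walk "common trend" plus stationary noise gives a cointegrated pair.
trend = np.cumsum(rng.standard_normal(1000))
x = trend + rng.standard_normal(1000)
y = 0.8 * trend + rng.standard_normal(1000)

t_stat, p_value, _ = coint(x, y)   # Engle-Granger two-step cointegration test
print(p_value)                     # small p-value => reject "no cointegration"
```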

Lesson 6.8: Putting It Together: The Vector Error Correction Model (VECM)

A model that combines long-run equilibrium (cointegration) with short-run dynamics (VAR).

Lesson 6.9: Capstone: Building a Pairs Trading Strategy with Cointegration

A complete, real-world project to find a cointegrated pair of stocks and build a basic trading strategy.

Bayes' Theorem

A framework for updating beliefs with new evidence.

Bernoulli Distribution

Modeling a single trial with two outcomes.

Binomial Distribution

Modeling a series of success/fail trials.

Poisson Distribution

Modeling the frequency of rare events.

Geometric Distribution

Modeling trials until the first success.

Hypergeometric Distribution

Modeling sampling without replacement.

Negative Binomial Distribution

Modeling trials until a set number of successes.

Discrete Uniform Distribution

Modeling situations where all outcomes are equally likely.

Multinomial Distribution

Generalizing the Binomial for multiple outcomes.

Gamma Distribution

Modeling waiting times and skewed data.

Beta Distribution

Modeling probabilities, percentages, and proportions.

Exponential Distribution

Modeling the time between events in a Poisson process.

Cauchy Distribution

Modeling extreme events and 'fat-tailed' phenomena.

Laplace Distribution

Modeling data with a sharp peak and 'fat tails'.

Weibull Distribution

Modeling time-to-failure and event durations.

Logistic Distribution

A key distribution in machine learning and growth modeling.

The Basics: Sample Spaces & Events

Understanding the building blocks of probability.

Combinatorics: The Art of Counting

Techniques for counting outcomes and possibilities.

Conditional Probability & Independence

How the occurrence of one event affects another.

Bayes' Theorem

Updating beliefs in the face of new evidence.

Random Variables (Discrete & Continuous)

Mapping outcomes of a random process to numbers.

Expectation, Variance & Moments

Calculating the center, spread, and shape of a distribution.

Common Discrete Distributions

Exploring Bernoulli, Binomial, and Poisson distributions.

Joint, Marginal & Conditional Distributions

Modeling the behavior of multiple random variables at once.

Covariance & Correlation

Measuring how two random variables move together.

The Law of Large Numbers (LLN)

Why casino averages are so stable.

The Central Limit Theorem (CLT)

Why the normal distribution is everywhere.

Transformations of Random Variables

Finding the distribution of a function of a random variable.

Introduction to Information Theory

Quantifying information with Entropy and KL Divergence.

Introduction to Stochastic Processes & Stationarity

Understanding random phenomena that evolve over time.

Discrete-Time Markov Chains

Modeling memoryless state transitions.

The Poisson Process

Modeling the timing of random events.

Random Walks & Brownian Motion

The mathematical foundation of stock price movements.

Sigma-Algebras & Probability Measures

The rigorous foundation of modern probability.

The Lebesgue Integral & Rigorous Expectation

A more powerful theory of integration.

Martingales

The formal model of a fair game.

Introduction to Itô Calculus

The calculus of random walks, essential for derivatives pricing.

The Language of ML: Data, Features & Labels

The Three Flavors of Learning (Supervised, Unsupervised, Reinforcement)

Your First Models: An Intuitive Look (KNN & Simple Linear Regression)

The Golden Rule: How to Split Your Data (Train, Validate, Test)

How Do We Score a Model? (Accuracy, Confusion Matrix, MSE)

Mental Math for Interviews

Sharpen your calculation speed and accuracy for interviews.

The Financial ML Landscape (Alpha, Risk, Execution)

Feature Engineering for Financial Data (Price, Volume, Order Books)

Core Predictive Models (Trees, Boosting, Regularization)

Backtesting & Model Validation (Walk-Forward, Sharpe Ratio)

Classical Time-Series Models (ARIMA, GARCH)

Deep Learning for Sequences (LSTMs, Transformers)

Stationarity & Memory in Markets (ADF Test, Frac. Diff.)

Building Trading Signals from Predictions (Meta-Labeling)

Credit Default Prediction & Scoring

Anomaly & Financial Fraud Detection (Isolation Forests, Autoencoders)

Modeling Volatility & Value-at-Risk (VaR)

Model Explainability & Interpretability (SHAP, LIME)

Financial Sentiment Analysis (News, Earnings Reports, Tweets)

Information Extraction (NER, Topic Modeling)

Advanced Text Representation (Word2Vec, Transformers - BERT)

Integrating NLP Signals into Trading Models

Reinforcement Learning for Optimal Trading

Portfolio Optimization with ML (Covariance Estimation)

Leveraging Alternative Data (Satellite Imagery, Web Data)

AI Ethics & Regulation in Finance

Key Concepts - Mean and Variance

Mean (μ) as the "expected value" or "center" of a distribution. Variance (σ²) as the measure of "spread" or "messiness" of all possible outcomes.

Standard Deviation (The "Intuitive" Spread)

Standard Deviation (σ = √σ²) as the "fix" for variance. Why we use σ instead of σ².

The Normal Distribution N(μ, σ²)

The "bell curve" as the mathematical "recipe" for pure randomness. How μ and σ² affect its shape.

Calculus Review - Derivatives (The "Slope")

A review of df/dt as the "instantaneous rate of change" and an introduction to partial derivatives.

Calculus Review - Integrals (The "Sum")

Reviewing ∫f(x)dx as "summing up an infinite number of tiny pieces" to find a total area or total change.

The Rules of "Normal" Infinitesimals

Why we can ignore second-order terms like (Δt)² in normal calculus. This is the key rule that is about to be broken.

Why Normal Calculus Fails

Discover the "infinitely wiggly" path of a stock and why df/dt (a "slope") doesn't exist for a random path.

Brownian Motion (Wiener Process Wt)

Defining our 'perfectly random' path and its four key properties: W₀ = 0, continuous paths, independent increments, and normally distributed increments: Wₜ − Wₛ ~ N(0, t−s).

The "Weird" Scaling Property

Deriving the "typical size" of a single random step. Why does Wt ~ N(0, t) mean the Standard Deviation is √t?

A Model for Stocks (Geometric Brownian Motion)

Building the SDE: dSₜ = μSₜdt + σSₜdWₜ. The "Drift" (μ) is the river's current, and "Diffusion" (σ) is the "jiggliness" of the water.
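
A minimal sketch of simulating one GBM path using the exact solution of that SDE; the parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

S0, mu, sigma = 100.0, 0.07, 0.20      # illustrative spot, drift, and volatility
T, n = 1.0, 252                        # one year of daily steps
dt = T / n

dW = np.sqrt(dt) * rng.standard_normal(n)
# Exact solution of dS = mu*S*dt + sigma*S*dW:  S_t = S_0 * exp((mu - 0.5*sigma^2) t + sigma W_t)
S = S0 * np.exp(np.cumsum((mu - 0.5 * sigma**2) * dt + sigma * dW))
print(S[-1])                           # simulated price after one year
```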

The Failure of Path Length

Proving why a Brownian Motion has no derivative by showing that the "path length," Σ|ΔW|, is proportional to Σ√Δt, which goes to infinity.

Quadratic Variation (The "Aha!" Moment)

If Σ|ΔW| fails, what about Σ(ΔW)²? Showing that Σ(ΔW)² is proportional to Σ(√Δt)² = ΣΔt = T. The "squared" path is finite.
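
A quick numerical check of both claims on [0, T] with T = 1 and successively finer grids: the absolute-increment sum blows up, while the squared-increment sum settles near T.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1.0

for n in (100, 10_000, 1_000_000):
    dW = np.sqrt(T / n) * rng.standard_normal(n)   # Brownian increments on an n-step grid
    print(n, np.sum(np.abs(dW)).round(2), np.sum(dW**2).round(4))
    # First column grows like sqrt(n); second column hovers near T = 1.
```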

The New Rules of Algebra

Turning our discovery into rules for our infinitesimal steps: (dt)²=0, dt·dWt=0, and the "Weird Rule" (dWt)²=dt.

The Itô Integral

How do these new rules change integration? We'll compute ∫Wₜ dWₜ over [0, T] and show that it equals ½W_T² − ½T. That extra −½T term is the "cost of randomness."
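
A small simulation of the left-endpoint (Itô) Riemann sum for ∫Wₜ dWₜ on [0, 1], which lands near ½W_T² − ½T rather than the classical ½W_T².

```python
import numpy as np

rng = np.random.default_rng(0)
T, n = 1.0, 1_000_000

dW = np.sqrt(T / n) * rng.standard_normal(n)
W = np.concatenate(([0.0], np.cumsum(dW)))     # W_0 = 0, ..., W_T on the grid

ito_sum = np.sum(W[:-1] * dW)                  # left-endpoint sum: sum of W_{t_i} * (W_{t_{i+1}} - W_{t_i})
print(ito_sum, 0.5 * W[-1]**2 - 0.5 * T)       # the two agree up to discretization error
```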

Prerequisite - The Taylor Expansion

A review of our key tool. How to get Δf ≈ f'(x)Δx + ½f''(x)(Δx)². This is the foundation for Itô's Lemma.

Itô's Lemma (Simple Case, for f(Wt))

What's the chain rule for a function of just Wt? We'll plug ΔW into our Taylor formula and apply our "weird algebra" rules. Result: df = f'(Wₜ)dWₜ + ½f''(Wₜ)dt.

Prerequisite - The 2-Variable Taylor Expansion

What if our function depends on two variables, like f(t, S_t)? We will show all 5 terms of the expansion.

Itô's Lemma (The Full Version, for f(t, Xt))

The "final boss" of our theory. We'll combine all our tools: the 2-variable Taylor expansion, the SDE (dSt = a dt + b dWt), and our "weird algebra" rules. We will go step-by-step, showing which of the 5 Taylor terms "survive" and which "die" (go to 0). Result: df = (∂f/∂t + a∂f/∂S + ½b²∂²f/∂S²)dt + (b∂f/∂S)dWt

Understanding the Full Formula (Translating Math to Finance)

A physical meaning for all 4 terms in the full Itô's Lemma. (Time Decay, Drift Effect, Itô Correction, and the new Random Part).

Lesson 4.1: The "Magic Portfolio"

Constructing the delta-hedged portfolio (Π = -V + ΔS) and finding its SDE, dΠ.

Lesson 4.2: Eliminating Risk

How to choose a "magic" value for Δ that makes the entire dWₜ ("random") term equal zero. We'll solve for Δ and find that Δ = ∂V/∂S.

Lesson 4.3: Eliminating Drift μ

Plugging our "magic" Δ back into the drift part of our portfolio to watch the subjective μ term vanish completely.

Lesson 4.4: The "No-Free-Lunch" Argument & The Final PDE

Our portfolio is now risk-free, so it must earn the risk-free rate, r. We'll set our two equations for dΠ equal to each other and rearrange to get the Black-Scholes-Merton PDE.

The Solution (The Black-Scholes Formula)

Explaining the physical meaning of the famous formula: (Expected Benefit) - (Expected Cost).
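
A minimal sketch of the European call formula (assumed notation: S spot, K strike, r risk-free rate, σ volatility, T time to expiry, no dividends).

```python
import numpy as np
from scipy.stats import norm

def bs_call(S, K, r, sigma, T):
    """Black-Scholes price of a European call (no dividends)."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    # "Expected benefit" minus "expected cost", both under the risk-neutral measure.
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

print(bs_call(S=100, K=100, r=0.05, sigma=0.2, T=1.0))   # ~10.45
```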

Delta (Δ): The "Speed" of an Option

Δ = ∂V/∂S. How much V moves when S moves $1. How to use Delta to hedge.

Gamma (Γ): The "Acceleration"

Γ = ∂²V/∂S². How much Δ moves when S moves $1. Why Gamma is the "risk of your hedge."

Vega (ν): The "Jiggle Risk"

ν = ∂V/∂σ. How much V moves when volatility σ changes by 1%. Why "panic is good" for an option holder.

Theta (Θ): The "Melting Ice Cube"

Θ = ∂V/∂t. The rate of time decay. The cost of waiting.

Rho (ρ): The "Interest Rate Risk"

ρ = ∂V/∂r. The sensitivity to changes in the risk-free rate.

The "Other Way" - Risk-Neutral Valuation

Introducing the risk-neutral measure Q and the fundamental theorem of asset pricing.

The "Computer Way" - Monte Carlo Methods

Using simulation to price complex derivatives that have no closed-form solution.

Fixing σ - Stochastic Volatility (Heston Model)

Modeling volatility itself as a random process to capture market dynamics like the "volatility smile."

Fixing "No Jumps" - Jump-Diffusion (Merton Model)

Adding "jumps" to our random walk to account for sudden market crashes and shocks.

Fixing r - Stochastic Interest Rates (Vasicek & CIR)

Modeling the risk-free rate itself as a random process for pricing long-term bonds and derivatives.