All Topics
Browse every topic available on QuantPrep. Use the search to find exactly what you're looking for.
Interactive tools for hands-on probability and statistics analysis.
Vectors, matrices, and eigenvalues. The language of data.
The science of collecting, analyzing, and interpreting data.
Master random variables, distributions, and stochastic processes.
ARIMA, GARCH, and forecasting market movements.
Building predictive models for financial markets.
Vectors as geometric arrows vs. vectors as ordered lists of numbers (the data science view).
Addition, subtraction (tip-to-tail rule), and scalar multiplication (stretching/shrinking).
The dot product as a measure of 'projection' or 'agreement.' L1 and L2 norms as measures of length/magnitude. Cosine similarity as a practical application.
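These operations take only a few lines of NumPy; the vectors below are illustrative, not from any dataset:

```python
import numpy as np

u = np.array([1.0, 2.0, 2.0])
v = np.array([2.0, 0.0, 1.0])

dot = u @ v                      # the 'agreement' between u and v
l1 = np.sum(np.abs(u))           # L1 norm: sum of absolute components
l2 = np.linalg.norm(u)           # L2 norm: Euclidean length
cos_sim = dot / (np.linalg.norm(u) * np.linalg.norm(v))  # cosine similarity
```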
The concept of perpendicular vectors (dot product = 0) and its meaning: independence.
A matrix as a container for data (a collection of vectors) vs. a matrix as a linear transformation that moves, rotates, and scales space.
Addition, scalar multiplication, and the transpose.
Taught not just as a rule, but as the composition of linear transformations. This explains why AB ≠ BA.
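A minimal sketch of non-commutativity, using two illustrative transformations (a rotation and a reflection): applying them in different orders lands vectors in different places, so AB ≠ BA.

```python
import numpy as np

R = np.array([[0., -1.], [1., 0.]])   # rotate 90° counterclockwise
F = np.array([[1., 0.], [0., -1.]])   # reflect across the x-axis

AB = R @ F   # reflect first, then rotate
BA = F @ R   # rotate first, then reflect
noncommutative = not np.allclose(AB, BA)
```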
Identity matrix (the 'do nothing' operation), inverse matrix (the 'undo' operation), diagonal, triangular, and symmetric matrices.
What can you build with a set of vectors?
Identifying and removing redundant vectors.
The minimal set of vectors needed to define a space and the concept of its dimension.
Formalizing these concepts. A subspace as a 'plane' or 'line' within a higher-dimensional space that passes through the origin.
Understanding Ax=b from the row picture (intersection of planes) and the column picture (linear combination of columns).
The core algorithm for solving linear systems. Row operations, row echelon form (REF).
Identifying if a system has a unique solution, no solution, or infinitely many solutions from its REF.
The ultimate, unique 'answer sheet' for a linear system, removing the need for back-substitution.
The 'matrix version' of Gaussian Elimination. Solving Ax=b becomes a fast, two-step process of forward and back substitution.
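A sketch of that two-step workflow with SciPy (the 2×2 system here is made up for illustration): factor once, then reuse the factorization for cheap forward/back substitution.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[4., 3.], [6., 3.]])
b = np.array([10., 12.])

lu, piv = lu_factor(A)        # the expensive step: PA = LU
x = lu_solve((lu, piv), b)    # forward + back substitution, fast
```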
The space of all possible outputs of A. The concept of rank as the "true dimension" of the output space.
The space of all inputs that map to the zero vector. Its connection to multicollinearity in data.
Completing the picture of the four fundamental subspaces.
How the four subspaces relate to each other and partition the input and output spaces.
The determinant as the scaling factor of area/volume.
Cofactor expansion and the properties of determinants. A determinant of zero means the matrix squishes space into a lower dimension (i.e., it's not invertible).
Finding the 'special' vectors that are only scaled by a transformation, not rotated off their span (Ax = λx).
The calculation behind eigenvalues: solving det(A - λI) = 0.
Decomposing a matrix into its core components: 'changing to the eigenbasis, scaling, and changing back.'
Using eigenvalues for tasks like calculating matrix powers (e.g., for Markov chains).
For symmetric matrices (like covariance matrices), the eigendecomposition is especially beautiful and stable (A = QDQᵀ). This is the theoretical foundation of PCA.
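A minimal check of A = QDQᵀ on a toy symmetric matrix (values chosen for illustration), using NumPy's `eigh`, which is designed for symmetric inputs:

```python
import numpy as np

A = np.array([[2., 1.], [1., 2.]])    # toy covariance-like symmetric matrix

vals, Q = np.linalg.eigh(A)           # eigenvalues ascending, orthonormal Q
D = np.diag(vals)
reconstructed = Q @ D @ Q.T           # A = Q D Qᵀ
```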
A highly efficient specialization for symmetric, positive-definite matrices, often used in optimization and financial modeling.
Introducing the goal of minimizing the error ||Ax - b||.
Finding the closest point in a subspace (the Column Space) to an external vector.
Deriving AᵀAx̂ = Aᵀb from the projection geometry. This is the engine of Linear Regression.
Understanding why AᵀA can be ill-conditioned and lead to numerical errors.
An algorithm for creating a "nice" orthonormal basis from any starting basis.
Using Gram-Schmidt to factor A = QR, which makes solving the least squares problem trivial (back-substitute Rx̂ = Qᵀb) and numerically robust.
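A sketch of QR-based least squares on a tiny made-up dataset (an exact line y = 1 + 2t, so the fit is unambiguous):

```python
import numpy as np

t = np.array([0., 1., 2., 3.])
A = np.column_stack([np.ones_like(t), t])   # design matrix [1, t]
b = np.array([1., 3., 5., 7.])              # exactly 1 + 2t

Q, R = np.linalg.qr(A)                      # orthonormal Q, upper-triangular R
x_hat = np.linalg.solve(R, Q.T @ b)         # back-substitute R x̂ = Qᵀ b
```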
The ultimate decomposition (A = UΣVᵀ) that works for any matrix and finds orthonormal bases for all four fundamental subspaces simultaneously.
A direct, powerful application of SVD on the data matrix for dimensionality reduction.
Low-rank approximation for noise reduction, and the core ideas behind recommendation systems.
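A small illustration of low-rank approximation via SVD (the matrix is a textbook-style example, not data from the course): the error of the best rank-k approximation equals the first dropped singular value.

```python
import numpy as np

A = np.array([[3., 2., 2.], [2., 3., -2.]])
U, s, Vt = np.linalg.svd(A, full_matrices=False)

k = 1  # keep only the top singular value
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]   # best rank-1 approximation
err = np.linalg.norm(A - A_k, 2)              # equals the dropped σ₂
```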
Deriving portfolio variance from first principles using linear algebra.
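The core formula σ_p² = wᵀΣw in a few lines, with hypothetical two-asset numbers (weights, volatilities, and correlation chosen purely for illustration):

```python
import numpy as np

w = np.array([0.6, 0.4])            # portfolio weights
vols = np.array([0.20, 0.10])       # asset volatilities
rho = 0.3                           # correlation
Sigma = np.outer(vols, vols) * np.array([[1., rho], [rho, 1.]])

port_var = w @ Sigma @ w            # σ_p² = wᵀ Σ w
port_vol = np.sqrt(port_var)
```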
Using linear algebra to construct optimal portfolios.
Understanding the relationship between risk and expected return.
Connecting "no free lunch" to the geometry of vector spaces.
Modeling dynamic systems like credit ratings with transition matrices.
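A sketch with a hypothetical one-year transition matrix over three states (A-rated, B-rated, Default); powers of the matrix give multi-year rating distributions:

```python
import numpy as np

P = np.array([
    [0.90, 0.08, 0.02],
    [0.10, 0.80, 0.10],
    [0.00, 0.00, 1.00],   # default is absorbing
])

start = np.array([1.0, 0.0, 0.0])               # begin rated A
after5 = start @ np.linalg.matrix_power(P, 5)   # distribution after 5 years
```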
Duration and convexity as linear algebraic concepts.
A comprehensive guide to choosing the right statistical test.
A practical guide to deciding if your results are a real breakthrough or just random noise.
Compares the means of two groups, assuming normally distributed data.
Compares means of large samples (n>30) with known population variance.
Compares the averages of three or more groups.
Compares the variances (spread) of two or more groups.
Measures the linear relationship between two continuous variables.
Analyzes categorical data to find significant relationships.
Alternative to the T-Test when data is not normally distributed.
Alternative to ANOVA for comparing three or more groups.
Alternative to the paired T-Test for repeated measurements.
Measures the monotonic relationship between two ranked variables.
The non-parametric alternative to a repeated-measures ANOVA.
Tests if a sample is drawn from a specific distribution.
The detective work of data science.
Interactive guide to mean, median, skewness, and kurtosis.
Discover how order emerges from chaos.
Understanding the range where a true value likely lies.
Calculate probabilities from Z-scores and vice-versa.
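Both directions of that conversion in SciPy (1.96 and 0.975 are the familiar textbook pair):

```python
from scipy.stats import norm

p = norm.cdf(1.96)    # Z-score → probability: P(Z ≤ 1.96) ≈ 0.975
z = norm.ppf(0.975)   # probability → Z-score: the 97.5% quantile ≈ 1.96
```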
The ubiquitous "bell curve."
Using random simulation to solve complex problems.
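The classic first Monte Carlo exercise, estimating π by sampling random points in the unit square and counting how many fall inside the quarter circle:

```python
import random

random.seed(0)
n = 100_000
inside = sum(
    random.random() ** 2 + random.random() ** 2 <= 1.0 for _ in range(n)
)
pi_est = 4 * inside / n   # area ratio of quarter circle to square is π/4
```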
Breaking down a time series into its core components.
Measuring how a time series correlates with its past values.
Modeling the changing volatility of financial returns.
Finding the optimal portfolio for a given level of risk.
Dynamically estimating the state of a system from noisy data.
The calculus of random walks, essential for derivatives pricing.
Understanding the building blocks of probability.
The three fundamental rules that govern all of probability.
How the occurrence of one event affects another.
Updating your beliefs in the face of new evidence.
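Bayes' theorem on the standard diagnostic-test example (the prior, sensitivity, and false-positive rate below are hypothetical illustration values): even a good test on a rare condition yields a modest posterior.

```python
prior = 0.01   # P(disease)
sens = 0.95    # P(positive | disease)
fpr = 0.05     # P(positive | no disease)

evidence = sens * prior + fpr * (1 - prior)   # P(positive), total probability
posterior = sens * prior / evidence           # P(disease | positive)
```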
Describing the probabilities of discrete outcomes.
Calculating the center and spread of a random variable.
Exploring key models for discrete random events.
A powerful tool for analyzing distributions.
Describing the probabilities of continuous outcomes.
Applying calculus to find the moments of continuous variables.
Exploring key models for continuous random events.
Extending moment generating functions to continuous cases.
Modeling the behavior of multiple random variables at once.
Isolating one variable's behavior from a joint distribution.
Measuring how two random variables move together.
Defining when two variables have no influence on each other.
Mastering the bell curve and standardization.
Understanding how normal variables combine.
The cornerstone of modern portfolio theory.
Dissecting multi-asset models.
Putting the multivariate normal to practical use.
The essential tool for inference with small samples.
The basis for tests of variance and goodness-of-fit.
The key to comparing variances across groups, and the distribution behind ANOVA's F-test.
Why casino averages are so stable.
Why the normal distribution is everywhere.
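Both laws in one small simulation (sample sizes chosen for speed): averages of uniform draws cluster at 0.5, and their spread matches the CLT prediction σ/√n with σ = √(1/12).

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 2_000, 1_000

# Each row is the average of n uniform(0, 1) draws.
means = rng.uniform(0, 1, size=(trials, n)).mean(axis=1)

lln_gap = abs(means.mean() - 0.5)        # LLN: averages settle near 0.5
clt_std = means.std()                    # CLT: spread of the averages
theory_std = (1 / 12) ** 0.5 / n ** 0.5  # predicted σ/√n
```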
Tools for approximating the distribution of functions of random variables.
Distinguishing between a function of data and a guess for a parameter.
Evaluating the accuracy of estimators.
Finding the "best" possible unbiased estimator.
Properties of estimators that improve with more data.
A straightforward technique for finding estimators.
The most important method for parameter estimation in finance.
The practical side of implementing MLE.
A framework for creating intervals for any parameter.
Using t, χ², and Z pivotal quantities to build intervals.
The fundamental setup of all hypothesis tests.
Finding the most powerful test for a given significance level.
A general method for comparing nested models.
The two equivalent approaches to making a statistical decision.
Modeling the relationship between two variables.
The calculus behind finding the "best fit" line.
Assessing how well your linear model fits the data.
Extending SLR to multiple predictors using linear algebra.
The matrix algebra for solving a multiple regression problem.
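The normal equations β̂ = (XᵀX)⁻¹Xᵀy in code, on a tiny made-up dataset lying exactly on y = 2 + 3x so the answer is unambiguous:

```python
import numpy as np

X = np.column_stack([np.ones(4), np.array([1., 2., 3., 4.])])  # [1, x]
y = np.array([5., 8., 11., 14.])                               # exactly 2 + 3x

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)   # solve (XᵀX) β̂ = Xᵀ y
```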
The theoretical justification for using OLS.
Testing the significance of a single predictor.
Testing the significance of a group of predictors or the entire model.
The critical assumptions that must hold for OLS to be valid.
Diagnosing when predictors are too correlated with each other.
Handling non-constant variance in the error terms.
Detecting patterns in the error terms over time.
Decomposing the components of a time series.
The most important property for modeling time series data.
The key tools for identifying the structure of a time series.
A class of models for forecasting time series data.
Modeling the changing volatility of financial returns.
Using random simulation to solve complex problems.
A powerful resampling method for inference.
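A minimal bootstrap sketch on a small made-up sample: resample with replacement many times, and the spread of the resampled means approximates the standard error of the mean.

```python
import numpy as np

rng = np.random.default_rng(1)
data = np.array([2., 4., 4., 4., 5., 5., 7., 9.])

# 5,000 bootstrap resamples, each the same size as the original sample.
boots = rng.choice(data, size=(5_000, len(data)), replace=True).mean(axis=1)

se_boot = boots.std()                                 # bootstrap standard error
se_formula = data.std(ddof=1) / np.sqrt(len(data))    # textbook s/√n
```

With n this small the two differ slightly (the bootstrap uses the plug-in, ddof=0 variance), but they agree closely.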
A related method for bias and variance estimation.
The mathematical foundation of efficient markets.
The standard model for stock price paths.
Extending linear models to non-normal data.
Modeling probabilities, such as the probability of default.
Modeling the frequency of events.
A technique to handle multicollinearity and prevent overfitting.
A powerful method for automatically selecting important variables.
The gold standard for selecting model parameters.
An alternative framework for statistical inference.
The computational engine behind modern Bayesian analysis.
The algorithms that power MLE and machine learning.
Practical coding examples of core statistical techniques.
A framework for updating beliefs with new evidence.
Modeling a single trial with two outcomes.
Modeling a series of success/fail trials.
Modeling the frequency of rare events.
Modeling trials until the first success.
Modeling sampling without replacement.
Modeling trials until a set number of successes.
Modeling where all outcomes are equally likely.
Generalizing the Binomial for multiple outcomes.
Modeling waiting times and skewed data.
Modeling probabilities, percentages, and proportions.
Modeling the time between events in a Poisson process.
Modeling extreme events and 'fat-tailed' phenomena.
Modeling with a sharp peak and 'fat tails'.
Comparing variances between two or more samples.
The backbone of hypothesis testing with small sample sizes.
Modeling time-to-failure and event durations.
A key distribution in machine learning and growth modeling.
The distribution of the sum of squared standard normal deviates.
Understanding the building blocks of probability.
Techniques for counting outcomes and possibilities.
How the occurrence of one event affects another.
Updating beliefs in the face of new evidence.
Mapping outcomes of a random process to numbers.
Calculating the center, spread, and shape of a distribution.
Exploring Bernoulli, Binomial, and Poisson distributions.
Exploring Uniform, Normal, and Exponential distributions.
Modeling the behavior of multiple random variables at once.
Measuring how two random variables move together.
Why casino averages are so stable.
Why the normal distribution is everywhere.
Finding the distribution of a function of a random variable.
A powerful tool for analyzing distributions.
Quantifying information with Entropy and KL Divergence.
Understanding random phenomena that evolve over time.
Modeling memoryless state transitions.
Modeling the timing of random events.
The mathematical foundation of stock price movements.
The rigorous foundation of modern probability.
A more powerful theory of integration.
The formal model of a fair game.
The calculus of random walks, essential for derivatives pricing.
Sharpen your calculation speed and accuracy for interviews.