Lesson 4.3: Upgrading to a 3D World: Multiple Linear Regression (MLR)
We now move beyond a single cause and effect. This lesson upgrades our engine from Simple to Multiple Linear Regression, allowing us to model an outcome using several predictors at once. We'll discover the core power of MLR—the ability to 'control for' other variables—and master the essential matrix algebra that makes it possible.
Part 1: The Power of "Controlling For" Variables
In the real world, outcomes rarely have a single cause. A stock's return is driven by the market, interest rates, and sector performance. An exam score is driven by study hours, attendance, and prior GPA.
Multiple Linear Regression (MLR) allows us to isolate the effect of one variable while holding others constant. This is its superpower.
The Core Intuition: The Ice Cream & Shark Attack Problem
Imagine you collect data and find a strong positive correlation between monthly ice cream sales ($X$) and shark attacks ($Y$). A simple regression of $Y$ on $X$ would show $\hat{\beta}_1 > 0$ and be statistically significant.
- Naive Conclusion: "Ice cream causes shark attacks!"
- The Problem: There is a lurking, unobserved variable: **average monthly temperature** ($Z$). When it's hot, more people buy ice cream, AND more people go swimming.
- The MLR Solution: By including both variables in the model, $Y = \beta_0 + \beta_1 X + \beta_2 Z + \varepsilon$, the algorithm can see that once temperature is accounted for, the effect of ice cream sales on shark attacks becomes zero ($\hat{\beta}_1 \approx 0$).
MLR allows us to untangle these complex relationships by estimating the effect of each variable *ceteris paribus*—all other things being equal.
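Here is a minimal simulation of this story in Python. The variable names, coefficients, and noise levels are invented purely for illustration: temperature drives both series, and once it enters the model, the estimated ice cream effect collapses toward zero.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# The lurking variable drives both X and Y (made-up coefficients).
temperature = rng.normal(25, 5, n)
ice_cream = 2.0 * temperature + rng.normal(0, 3, n)   # sales rise with heat
sharks = 0.5 * temperature + rng.normal(0, 1, n)      # attacks rise with heat only

# Simple regression: sharks ~ ice_cream (omits the confounder).
X_simple = np.column_stack([np.ones(n), ice_cream])
b_simple, *_ = np.linalg.lstsq(X_simple, sharks, rcond=None)

# Multiple regression: sharks ~ ice_cream + temperature.
X_full = np.column_stack([np.ones(n), ice_cream, temperature])
b_full, *_ = np.linalg.lstsq(X_full, sharks, rcond=None)

print("ice cream slope, simple model:", b_simple[1])  # clearly positive
print("ice cream slope, full model:  ", b_full[1])    # near zero
```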
Part 2: The Model in Matrix Form
As we add more variables, working out the calculus one coefficient at a time becomes unmanageable. We must use the language of Linear Algebra. The MLR model is written as:

$$\mathbf{y} = \mathbf{X}\boldsymbol{\beta} + \boldsymbol{\varepsilon}$$
Let's visually deconstruct this for a model with $n$ data points and $k$ predictors (here $\boldsymbol{\varepsilon}$ is the $n \times 1$ vector of error terms); a short NumPy sketch follows the list.
- $\mathbf{y}$ ($n \times 1$): The vector of your outcome variable.
- $\mathbf{X}$ ($n \times (k+1)$): The **Design Matrix**. Each row is an observation (a student, a day). Each column is a variable (a "feature"). The first column is always 1s to represent the intercept.
- $\boldsymbol{\beta}$ ($(k+1) \times 1$): The vector of parameters we want to estimate.
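As a concrete sketch, here is how those pieces might look in NumPy for a tiny, made-up dataset of $n = 4$ students and $k = 2$ predictors (study hours and prior GPA):

```python
import numpy as np

# Outcome vector y: exam scores for n = 4 students (hypothetical numbers).
y = np.array([72.0, 85.0, 90.0, 65.0])              # shape (4,)

# k = 2 predictors ("features").
hours = np.array([5.0, 10.0, 12.0, 3.0])
gpa   = np.array([2.8, 3.5, 3.9, 2.5])

# Design matrix X: one row per student, first column of 1s for the intercept.
X = np.column_stack([np.ones(len(y)), hours, gpa])  # shape (4, 3) = (n, k + 1)
print(X)
```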
Part 3: The Master Formula and Its Interpretation
The beauty of the matrix form is that the OLS solution we derived in the last lesson works perfectly, unchanged. The formula doesn't care if $\mathbf{X}$ has 2 columns or 2,000 columns.
The OLS Estimator for MLR

$$\hat{\boldsymbol{\beta}} = (\mathbf{X}^\top \mathbf{X})^{-1} \mathbf{X}^\top \mathbf{y}$$
In MLR, the interpretation of a coefficient gains a critical new phrase:
" is the estimated change in for a one-unit increase in , **holding all other variables in the model constant**."
This is the mathematical equivalent of "controlling for" the other factors.
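A minimal NumPy sketch of the master formula, solving the Normal Equations $(\mathbf{X}^\top\mathbf{X})\hat{\boldsymbol{\beta}} = \mathbf{X}^\top\mathbf{y}$ directly rather than forming an explicit inverse. The data below is the same hypothetical student example used above.

```python
import numpy as np

def ols(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Solve the Normal Equations (X'X) beta = X'y for beta_hat."""
    return np.linalg.solve(X.T @ X, X.T @ y)

# Hypothetical data: exam score regressed on study hours and prior GPA.
X = np.array([[1.0,  5.0, 2.8],   # columns: intercept, hours, GPA
              [1.0, 10.0, 3.5],
              [1.0, 12.0, 3.9],
              [1.0,  3.0, 2.5]])
y = np.array([72.0, 85.0, 90.0, 65.0])

# beta_hat[0] is the intercept; each other entry is that variable's effect,
# holding the remaining columns of X constant.
beta_hat = ols(X, y)
print(beta_hat)
```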
Part 4: The Problem with R² in MLR
In MLR, the standard R² is a flawed metric. Why? Adding *any* variable to the model, even a column of random garbage, will **never cause the R² to decrease**. At worst, the SSR stays the same; usually, it drops a tiny bit just by chance, pushing R² up.
This encourages overfitting. We need a metric that penalizes us for adding useless variables. That metric is **Adjusted R²**.
Definition: Adjusted R-Squared
Adjusted R² modifies the regular R² by adjusting the SSR and TSS by their respective degrees of freedom:

$$R^2_{adj} = 1 - \frac{SSR / (n - k - 1)}{TSS / (n - 1)}$$

- $n$ is the number of observations.
- $k$ is the number of predictors.
Adding a new variable increases $k$, which increases the penalty term. If the new variable doesn't reduce SSR by a large enough amount to offset the penalty, the Adjusted R² will actually go **down**. This makes it a much more honest measure of a model's explanatory power.
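A quick illustrative experiment, with simulated data and an arbitrary seed: appending a column of pure noise nudges R² up (it can never fall), while Adjusted R² typically drops because the extra predictor doesn't earn its degree of freedom.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100

x1 = rng.normal(size=n)
y = 3.0 + 2.0 * x1 + rng.normal(size=n)            # true model uses only x1

def r2_and_adjusted(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    ssr = np.sum((y - X @ beta) ** 2)
    tss = np.sum((y - y.mean()) ** 2)
    k = X.shape[1] - 1                             # predictors, excluding the 1s column
    r2 = 1 - ssr / tss
    adj = 1 - (ssr / (n - k - 1)) / (tss / (n - 1))
    return r2, adj

X_base = np.column_stack([np.ones(n), x1])
X_junk = np.column_stack([X_base, rng.normal(size=n)])  # add a column of random garbage

print(r2_and_adjusted(X_base, y))
print(r2_and_adjusted(X_junk, y))  # R² is never lower; Adjusted R² usually falls
```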
What's Next? The Derivation
We've defined the MLR model and seen the "master formula" that solves it. The structure is identical to the one we proved in the last lesson.
In the next lesson, we will briefly confirm that the matrix calculus derivation for MLR is indeed the same as for SLR, solidifying our understanding of the Normal Equations and the power of the matrix approach.