Lesson 4.2: The Performance Review: R-Squared and Residuals
We've built the OLS model. Now we must judge its performance. This lesson introduces the single most famous statistic in data analysis, R-Squared (R²), by proving the fundamental identity of variance decomposition. We'll also explore the powerful geometric properties of the OLS residuals.
Part 1: The Pie of Total Variation
We've found the "best" possible line, but "best" does not mean "good." If the data is a random cloud, our line is useless. We need a way to measure how much of the "story" in our dependent variable ($Y$) is actually told by our model.
The 'Pie Chart' Analogy
Imagine the total "variation" in your $Y$ variable is a giant pie. This total variation is measured by the **Total Sum of Squares (TSS)**, which is how much the data points vary around their own average, $\bar{Y}$:
$$TSS = \sum_{i=1}^{n}(Y_i - \bar{Y})^2$$
Our OLS model's job is to "eat" as much of this pie as it can. The slice it eats is the **Explained Sum of Squares (ESS)**, the variation of the fitted values around that same average, $ESS = \sum_{i=1}^{n}(\hat{Y}_i - \bar{Y})^2$. The slice left over is the **Sum of Squared Residuals (SSR)**, the variation the model fails to capture, $SSR = \sum_{i=1}^{n}\hat{u}_i^2$.
The big question is: What percentage of the total pie did our model successfully explain?
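To make the pie concrete, here is a minimal numerical sketch (the toy data and variable names below are made up for illustration). It fits a simple OLS line with NumPy's `polyfit` and computes the whole pie and its two slices:

```python
# A minimal sketch of the "pie" decomposition on made-up data,
# using numpy.polyfit for the simple OLS fit.
import numpy as np

# Toy data: hours studied (X) and exam score (Y) -- illustrative only
X = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
Y = np.array([52.0, 60.0, 57.0, 68.0, 74.0, 79.0])

# Fit the OLS line Y_hat = b0 + b1 * X
b1, b0 = np.polyfit(X, Y, deg=1)       # polyfit returns [slope, intercept]
Y_hat = b0 + b1 * X
u_hat = Y - Y_hat                      # residuals

# The whole pie and its two slices
TSS = np.sum((Y - Y.mean()) ** 2)      # total variation around the mean
ESS = np.sum((Y_hat - Y.mean()) ** 2)  # variation explained by the line
SSR = np.sum(u_hat ** 2)               # leftover (unexplained) variation

print(f"TSS = {TSS:.2f}, ESS = {ESS:.2f}, SSR = {SSR:.2f}")
print(f"Share of the pie explained: {ESS / TSS:.3f}")
```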
Part 2: R-Squared and the Variance Decomposition
This leads us to the definition of the most famous metric in statistics.
Definition: R-Squared (R²), The Coefficient of Determination
$R^2$ is the proportion of the total variation in $Y$ (the whole pie) that is explained by our regression model (the slice we ate):
$$R^2 = \frac{ESS}{TSS}$$
This definition only makes sense if the "pie" can be perfectly split. We must now prove the fundamental identity of OLS: that the total variation is *exactly* the sum of the explained and unexplained parts.
Proof: The Variance Decomposition Identity (TSS = ESS + SSR)
Step 1: Decompose the total deviation. We start with the total deviation for one data point, $Y_i - \bar{Y}$, and cleverly add and subtract our predicted value, $\hat{Y}_i$:
$$Y_i - \bar{Y} = (Y_i - \hat{Y}_i) + (\hat{Y}_i - \bar{Y})$$
The first term is the residual ($\hat{u}_i = Y_i - \hat{Y}_i$). So,
$$Y_i - \bar{Y} = \hat{u}_i + (\hat{Y}_i - \bar{Y})$$
Step 2: Square and Sum. We square both sides and sum over all $n$ observations:
$$\sum_{i=1}^{n}(Y_i - \bar{Y})^2 = \sum_{i=1}^{n}\left[\hat{u}_i + (\hat{Y}_i - \bar{Y})\right]^2$$
Expanding the right side gives:
$$\sum_{i=1}^{n}(Y_i - \bar{Y})^2 = \sum_{i=1}^{n}\hat{u}_i^2 + 2\sum_{i=1}^{n}\hat{u}_i(\hat{Y}_i - \bar{Y}) + \sum_{i=1}^{n}(\hat{Y}_i - \bar{Y})^2$$
Step 3: Prove the cross-product is zero. We need to show that the interaction term, $\sum_{i=1}^{n}\hat{u}_i(\hat{Y}_i - \bar{Y})$, is zero. This is a magical property of OLS.
Distributing $\hat{u}_i$ splits the term in two:
$$\sum_{i=1}^{n}\hat{u}_i(\hat{Y}_i - \bar{Y}) = \sum_{i=1}^{n}\hat{u}_i\hat{Y}_i - \bar{Y}\sum_{i=1}^{n}\hat{u}_i$$
From the first-order conditions of our derivation in Lesson 4.1, we know that the sum of the residuals is zero, $\sum_{i=1}^{n}\hat{u}_i = 0$. So the second term is zero. We can also prove that the residuals are uncorrelated with the predicted values, meaning $\sum_{i=1}^{n}\hat{u}_i\hat{Y}_i = 0$ (because $\hat{Y}_i = \hat{\beta}_0 + \hat{\beta}_1 X_i$ and the first-order conditions also give $\sum_{i=1}^{n}\hat{u}_i X_i = 0$). Thus, the entire cross-product term is zero.
Conclusion: We are left with the beautiful identity:
$$\underbrace{\sum_{i=1}^{n}(Y_i - \bar{Y})^2}_{TSS} = \underbrace{\sum_{i=1}^{n}(\hat{Y}_i - \bar{Y})^2}_{ESS} + \underbrace{\sum_{i=1}^{n}\hat{u}_i^2}_{SSR}$$
$$TSS = ESS + SSR$$
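As a quick sanity check, the identity can be verified numerically. The sketch below reuses the same toy data and illustrative names as the Part 1 example; the two slices should sum to the whole pie up to floating-point rounding.

```python
# Numerical check of TSS = ESS + SSR on the same toy data as before.
import numpy as np

X = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
Y = np.array([52.0, 60.0, 57.0, 68.0, 74.0, 79.0])

b1, b0 = np.polyfit(X, Y, deg=1)
Y_hat = b0 + b1 * X
u_hat = Y - Y_hat

TSS = np.sum((Y - Y.mean()) ** 2)
ESS = np.sum((Y_hat - Y.mean()) ** 2)
SSR = np.sum(u_hat ** 2)

print(TSS, ESS + SSR)                 # same number, up to rounding
assert np.isclose(TSS, ESS + SSR)     # the decomposition identity holds
```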
The Alternative Formula for R²
Because $TSS = ESS + SSR$, we can express R² in terms of the unexplained variance:
$$R^2 = \frac{ESS}{TSS} = \frac{TSS - SSR}{TSS} = 1 - \frac{SSR}{TSS}$$
This is often how it's calculated in software. It answers: "What percentage of the pie is NOT left over?"
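Both formulas give the same number, which we can confirm on the same toy fit (again, illustrative data and names):

```python
# R-squared computed two ways: ESS/TSS and 1 - SSR/TSS.
import numpy as np

X = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
Y = np.array([52.0, 60.0, 57.0, 68.0, 74.0, 79.0])

b1, b0 = np.polyfit(X, Y, deg=1)
Y_hat = b0 + b1 * X
u_hat = Y - Y_hat

TSS = np.sum((Y - Y.mean()) ** 2)
ESS = np.sum((Y_hat - Y.mean()) ** 2)
SSR = np.sum(u_hat ** 2)

r2_explained = ESS / TSS            # "the slice we ate"
r2_leftover = 1.0 - SSR / TSS       # "what is NOT left over"

print(f"R^2 = {r2_explained:.4f} = {r2_leftover:.4f}")
assert np.isclose(r2_explained, r2_leftover)
```

In simple linear regression with an intercept, this value also equals the squared sample correlation between $X$ and $Y$, so `np.corrcoef(X, Y)[0, 1] ** 2` returns the same number.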
Part 3: The Geometric Properties of OLS Residuals
The Orthogonality Property
The fact that the cross-product term in our proof was zero is no accident. It's a result of the **Orthogonality** property of OLS. In the language of linear algebra, OLS guarantees that:
- The vector of residuals ($\hat{u}$) is orthogonal (uncorrelated) to the vector of predictors ($X$): $\sum_{i=1}^{n} X_i\hat{u}_i = 0$.
- The vector of residuals ($\hat{u}$) is orthogonal (uncorrelated) to the vector of fitted values ($\hat{Y}$): $\sum_{i=1}^{n} \hat{Y}_i\hat{u}_i = 0$.
What this means: OLS perfectly separates the data into two perpendicular components: the "signal" ($\hat{Y}$), which is a linear function of $X$, and the "noise" ($\hat{u}$), which is completely unrelated to $X$. Your model has extracted every last drop of linear information.
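These orthogonality conditions are also easy to check numerically. The sketch below, using the same illustrative setup as before, confirms that the residuals sum to zero and are orthogonal to both the predictor and the fitted values, up to floating-point error.

```python
# Numerical check of the orthogonality properties of OLS residuals.
import numpy as np

X = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
Y = np.array([52.0, 60.0, 57.0, 68.0, 74.0, 79.0])

b1, b0 = np.polyfit(X, Y, deg=1)
Y_hat = b0 + b1 * X
u_hat = Y - Y_hat

print(np.sum(u_hat))         # ~0: residuals sum to zero
print(np.dot(u_hat, X))      # ~0: residuals orthogonal to the predictor
print(np.dot(u_hat, Y_hat))  # ~0: residuals orthogonal to the fitted values

assert np.allclose([u_hat.sum(), u_hat @ X, u_hat @ Y_hat], 0.0)
```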
What's Next? Expanding to Multiple Variables
We have now built, derived, and learned how to evaluate the Simple Linear Regression model. This is the complete toolkit for analyzing a relationship between two variables.
But the real world is complex. An asset's return isn't just affected by the market; it might be affected by interest rates, oil prices, and currency fluctuations. An exam score isn't just affected by study hours; it's affected by sleep, prior GPA, and attendance.
In the next lesson, we will upgrade our engine from a single predictor to handle multiple predictors at once by introducing the **Multiple Linear Regression (MLR) model in Matrix Form**.