When we took two vectors pointing in different directions, $v = [1, 2]$ and $w = [-3, 1]$, their span was the entire 2D plane.
But when we took two vectors pointing in the same direction, $v = [1, 2]$ and $u = [2, 4]$, their span was just a line. The second vector, $u$, was **redundant**. It didn't add anything new to our span.
This idea of redundancy is central to linear algebra. We need a formal way to describe it, and that language is **Linear Independence**.
A set of vectors is said to be **linearly independent** if no vector in the set can be written as a linear combination of the others.
Conversely, a set of vectors is **linearly dependent** if at least one vector in the set *is* a linear combination of the others—making it redundant.
The Formal Definition (The "Zero" Test)
The "can one be made from the others" intuition is great, but there's a more robust, mathematical way to define this. Consider the vector equation:
$$c_1 v_1 + c_2 v_2 + \cdots + c_n v_n = 0$$
This asks: "Is there a linear combination of our vectors that results in the **zero vector** (the vector $[0, 0, \ldots]$ at the origin)?"
There is always one trivial solution: just set all the scalars to zero, so that $0 \cdot v_1 + 0 \cdot v_2 + \cdots = 0$. This is the "boring" solution.
The real question is: **Is the trivial solution the *only* solution?**
If the only way to get the zero vector is the trivial solution (all $c_i$ are zero), then the set of vectors $\{v_1, v_2, \ldots\}$ is **linearly independent**.
If there is *any* non-trivial solution (where at least one $c_i$ is not zero), then the set is **linearly dependent**.
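One common way to run this test numerically is to stack the vectors as columns of a matrix and check whether the matrix has full column rank. Here is a minimal NumPy sketch; the helper name `is_linearly_independent` is our own, not a library function:

```python
import numpy as np

def is_linearly_independent(*vectors):
    """True iff the only combination c1*v1 + ... + cn*vn = 0 is the trivial one."""
    A = np.column_stack(vectors)  # each vector becomes a column of A
    # Full column rank means no column is a combination of the others.
    return np.linalg.matrix_rank(A) == A.shape[1]

print(is_linearly_independent([1, 2], [-3, 1]))  # True  -> they span the plane
print(is_linearly_independent([1, 2], [2, 4]))   # False -> u = 2v is redundant
```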
Why This Test Works
Let's see this in action with our dependent set $\{v, u\}$, where $v = [1, 2]$ and $u = [2, 4]$. We know that $u = 2v$, which can be rewritten as:

$$2v - u = 0$$

This is $2 \cdot v + (-1) \cdot u = 0$. This is a non-trivial linear combination (the scalars are $2$ and $-1$) that equals the zero vector! This proves that the set is linearly dependent. It captures the redundancy perfectly.
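For completeness, here is that same combination checked numerically:

```python
import numpy as np

v = np.array([1, 2])
u = np.array([2, 4])

# The non-trivial combination 2*v + (-1)*u collapses to the zero vector.
print(2 * v - u)  # [0 0]
```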
Visualizing Linear Dependence
In 2D:
- Two vectors are linearly dependent if they are **collinear** (they lie on the same line).
- Three or more vectors in 2D are *always* linearly dependent: at least one of them can be written as a combination of the other two.
In 3D:
- Two vectors are linearly dependent if they are collinear.
- Three vectors are linearly dependent if they are **coplanar** (they lie in the same plane). One vector's contribution is redundant because you can already reach its tip using a combination of the other two, as the rank check below makes concrete.
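The rank of the matrix whose columns are your vectors tells the same geometric story: collinear vectors give rank 1, coplanar vectors give rank 2. A minimal sketch, where the third vector is deliberately built as the sum of the first two so it lies in their plane:

```python
import numpy as np

a = np.array([1, 0, 0])
b = np.array([0, 1, 0])
c = a + b  # lies in the plane spanned by a and b, so it is redundant

A = np.column_stack([a, b, c])
print(np.linalg.matrix_rank(A))  # 2, not 3 -> the three vectors are coplanar
```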
The Core Idea for Quants & ML
Linear dependence (or near-dependence) among your feature columns is called **multicollinearity**, and it's a major headache for many statistical models, especially linear regression.
Imagine you're predicting a company's stock price using these three features:
- $f_1$: the company's quarterly revenue in USD.
- $f_2$: the company's quarterly revenue in EUR.
- $f_3$: the number of employees.
Because $f_2$ is essentially $f_1$ multiplied by an exchange rate, the two feature vectors are almost perfectly linearly dependent. A regression model would have a terrible time assigning importance between them: it can shift weight freely from one to the other without changing the fit. Identifying and removing linearly dependent features is a crucial step in building robust quantitative models.
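Here is a sketch of how this shows up numerically. The data is entirely synthetic (the revenue range, the noise level, and the 1.08 USD/EUR rate are made-up illustrations), but the rank and condition number are standard diagnostics:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100

# Hypothetical features (all numbers are made up for illustration).
rev_usd = rng.uniform(1e6, 5e6, size=n)                 # f1: revenue in USD
rev_eur = rev_usd / 1.08 + rng.normal(0, 10.0, size=n)  # f2: ~ f1 / exchange rate
headcount = rng.uniform(50, 500, size=n)                # f3: employees

X = np.column_stack([rev_usd, rev_eur, headcount])

# Exact rank says "independent", but the condition number exposes the
# near-dependence between f1 and f2: the matrix is numerically fragile.
print(np.linalg.matrix_rank(X))    # 3
print(f"{np.linalg.cond(X):.2e}")  # very large -> multicollinearity
```

A huge condition number like this is the practical symptom of near-dependence: tiny changes in the data can swing the regression weights on $f_1$ and $f_2$ wildly.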
Summary: The Efficiency Test
- **Linearly Independent:** A set of vectors containing no redundant information. Each vector adds a new dimension to the span. The only linear combination that equals the zero vector is the trivial one.
- **Linearly Dependent:** A set of vectors containing redundant information. At least one vector is a combination of the others and does not expand the span. There is a non-trivial linear combination that equals the zero vector.
Up Next: We will combine the ideas of Span and Linear Independence to define a **Basis**—the perfect, minimal set of building blocks for any vector space.