Linear Independence

Identifying and removing redundant vectors.

In our last lesson, we saw that when we took two vectors pointing in different directions, their span was the entire 2D plane. But when we took two collinear vectors, the second vector was redundant; it didn't add anything new to our span.

This idea of redundancy is central to linear algebra. A set of vectors is said to be linearly independent if no vector in the set can be written as a linear combination of the others. Conversely, a set is linearly dependent if at least one vector is a combination of the others.
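To make the definition concrete, here is a tiny sketch in Python (the vectors and coefficients are made up for illustration): one vector is built as a combination of the other two, so the set is linearly dependent.

```python
# A hypothetical dependent set in 2D: v3 is a combination of v1 and v2.
v1 = (1.0, 0.0)
v2 = (0.0, 1.0)
v3 = (2.0, 3.0)  # equals 2*v1 + 3*v2, so {v1, v2, v3} is linearly dependent

# Verify the combination componentwise.
assert all(v3[i] == 2 * v1[i] + 3 * v2[i] for i in range(2))
```

Because v3 adds no direction that v1 and v2 don't already span, removing it leaves the span unchanged.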

The Formal Definition (The "Zero" Test)

How the Test Works

Consider the vector equation:

$$c_1\vec{v}_1 + c_2\vec{v}_2 + \dots + c_n\vec{v}_n = \vec{0}$$

This asks: "Is there a linear combination of our vectors that results in the zero vector?" There is always one trivial solution: set all the scalars to zero. The real question is: **Is the trivial solution the *only* solution?**

  • If the only way to get the zero vector is the trivial solution (all $c$'s are zero), then the set of vectors is linearly independent.
  • If there is *any* non-trivial solution (where at least one $c$ is not zero), then the set is linearly dependent.
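The zero test can be carried out mechanically with Gaussian elimination: put the vectors in as the columns of a matrix and check whether every column gets a pivot. A minimal pure-Python sketch (the function name and tolerance are my own choices, not from the lesson):

```python
def is_independent(vectors, tol=1e-9):
    """Return True if the given vectors (lists of floats, all the same
    length) are linearly independent. Uses Gaussian elimination to check
    whether c1*v1 + ... + cn*vn = 0 forces every c to be zero, i.e.
    whether the matrix with these vectors as columns has full column rank."""
    rows, cols = len(vectors[0]), len(vectors)
    # Build the matrix whose columns are the vectors.
    m = [[vectors[j][i] for j in range(cols)] for i in range(rows)]
    rank = 0
    for col in range(cols):
        # Find a pivot row for this column at or below the current rank.
        pivot = next((r for r in range(rank, rows) if abs(m[r][col]) > tol), None)
        if pivot is None:
            # No pivot: this column's coefficient is a free variable,
            # so a non-trivial solution exists -> dependent.
            return False
        m[rank], m[pivot] = m[pivot], m[rank]
        # Eliminate entries below the pivot.
        for r in range(rank + 1, rows):
            f = m[r][col] / m[rank][col]
            for c in range(col, cols):
                m[r][c] -= f * m[rank][c]
        rank += 1
    return True

print(is_independent([[1, 0], [0, 1]]))        # independent basis of 2D
print(is_independent([[2, 4], [1, 2]]))        # collinear -> dependent
print(is_independent([[1, 0], [0, 1], [1, 1]]))  # 3 vectors in 2D -> dependent
```

Every column that gains a pivot pins its coefficient to zero; a column with no pivot is exactly a non-trivial solution to the zero equation.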

Visualizing Linear Dependence

  • In 2D: Two vectors are linearly dependent if they are collinear. Three or more vectors in 2D are *always* linearly dependent.
  • In 3D: Three vectors are linearly dependent if they are coplanar (they lie on the same plane).
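Both geometric pictures can be checked numerically with determinants: two 2D vectors are collinear exactly when their 2×2 determinant is zero, and three 3D vectors are coplanar exactly when their scalar triple product (the 3×3 determinant) is zero. A minimal sketch, with function names of my own choosing:

```python
def collinear_2d(u, v, tol=1e-9):
    """Two 2D vectors are linearly dependent iff the 2x2 determinant
    with u and v as columns is zero."""
    return abs(u[0] * v[1] - u[1] * v[0]) < tol

def coplanar_3d(u, v, w, tol=1e-9):
    """Three 3D vectors are linearly dependent iff their scalar triple
    product u . (v x w) -- the 3x3 determinant -- is zero."""
    cx = v[1] * w[2] - v[2] * w[1]   # components of the cross product v x w
    cy = v[2] * w[0] - v[0] * w[2]
    cz = v[0] * w[1] - v[1] * w[0]
    return abs(u[0] * cx + u[1] * cy + u[2] * cz) < tol

print(collinear_2d([2, 4], [1, 2]))                      # collinear -> dependent
print(collinear_2d([1, 0], [0, 1]))                      # independent
print(coplanar_3d([1, 0, 0], [0, 1, 0], [1, 1, 0]))      # all in the xy-plane
print(coplanar_3d([1, 0, 0], [0, 1, 0], [0, 0, 1]))      # independent
```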

Summary: The Efficiency Test

  • Linearly Independent: A set of vectors containing no redundant information. Each vector adds a new dimension to the span.
  • Linearly Dependent: A set of vectors containing redundant information. At least one vector is a combination of the others.