In our last lesson, we focused on the **Column Space**—the set of all possible *outputs* `b` for which the equation `Ax = b` has a solution. It represents the "reach" of our matrix transformation.
Now, we ask a fundamentally different and more profound question. Let's consider the simplest possible target: the zero vector, `0`. What if we set `b=0` and try to solve the equation `Ax = 0`?
This special type of system is called a **homogeneous system**. It always has at least one solution, the "trivial" one, where `x` is the zero vector (`A * 0 = 0`). But are there other, more interesting solutions?
The set of **all possible solutions `x`** to the homogeneous equation `Ax = 0` is called the **Null Space** of the matrix `A`. It is also sometimes called the **Kernel**.
`N(A) = { all vectors x such that Ax = 0 }`
The Intuition: What Gets "Squashed" to Zero?
The Null Space is the set of all input vectors that get **squashed, collapsed, or annihilated** into the single point of the origin by the transformation `A`.
Think of a matrix `A` that represents a projection from 3D space onto the xy-plane. The input vector `[1, 2, 3]` might land on the output vector `[1, 2, 0]`. But what input vector lands on `[0, 0, 0]`?
The input `[0, 0, 5]` gets projected straight down to the origin.
So does `[0, 0, -2]` and `[0, 0, 1000]`.
In fact, **any vector on the z-axis** gets squashed to the origin.
In this case, the entire z-axis is the **Null Space** of the projection matrix. It is the subspace of all inputs that are "lost" or "zeroed out" by the transformation.
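To make this concrete, here is a minimal NumPy sketch of that projection. (The matrix `A` below is the standard projection onto the xy-plane; the specific test vectors are just illustrations.)

```python
import numpy as np

# Projection from 3D onto the xy-plane: keeps x and y, zeroes out z.
A = np.array([
    [1, 0, 0],
    [0, 1, 0],
    [0, 0, 0],
])

print(A @ np.array([1, 2, 3]))    # [1 2 0]  -> lands on the xy-plane
print(A @ np.array([0, 0, 5]))    # [0 0 0]  -> squashed to the origin
print(A @ np.array([0, 0, -2]))   # [0 0 0]  -> every z-axis vector is annihilated
```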
The Null Space is not a random collection of vectors. It is always a **subspace** of the input space: it contains the zero vector, and if `Ax = 0` and `Ay = 0`, then `A(cx + dy) = c·Ax + d·Ay = 0`, so it is closed under addition and scalar multiplication.
Finding a Basis for the Null Space
The Null Space is defined by the solutions to `Ax=0`. How do we find these solutions? With the most powerful tool in our toolbox: **Gauss-Jordan Elimination (to RREF)**.
Example:
Let's find the Null Space of this matrix `A`:
$$A = \begin{bmatrix} 1 & 2 & 3 & 4 \\ 2 & 4 & 7 & 9 \\ 3 & 6 & 8 & 11 \end{bmatrix}$$
Step 1: Reduce `A` to its Reduced Row Echelon Form (RREF).
We are solving `Ax = 0`. We can work with `A` alone, because the zero vector on the right side of the augmented matrix stays zero under every row operation.
After full elimination, the RREF matrix `R` is:
$$R = \begin{bmatrix} 1 & 2 & 0 & 1 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 0 \end{bmatrix}$$
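If you want to check the elimination by machine, SymPy's `rref()` computes this directly. A quick sketch, using the matrix `A` from above:

```python
import sympy as sp

A = sp.Matrix([
    [1, 2, 3,  4],
    [2, 4, 7,  9],
    [3, 6, 8, 11],
])

R, pivot_cols = A.rref()   # returns (RREF matrix, indices of pivot columns)
print(R)                   # Matrix([[1, 2, 0, 1], [0, 0, 1, 1], [0, 0, 0, 0]])
print(pivot_cols)          # (0, 2) -- 0-indexed, i.e. columns 1 and 3
```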
Step 2: Identify pivot variables and free variables.
Pivot Columns: 1 and 3. So, `x₁` and `x₃` are pivot variables.
Free Columns: 2 and 4. So, `x₂` and `x₄` are **free variables**.
Step 3: Write the general solution to `Rx = 0`.
Let `x₂ = s` and `x₄ = t`, where `s` and `t` can be any real numbers.
From Row 1: `x₁ + 2x₂ + x₄ = 0  ⟹  x₁ = -2x₂ - x₄ = -2s - t`
From Row 2: `x₃ + x₄ = 0  ⟹  x₃ = -x₄ = -t`
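As a sanity check, we can substitute this general solution back into `Ax = 0` symbolically. A short SymPy sketch:

```python
import sympy as sp

s, t = sp.symbols('s t')
A = sp.Matrix([[1, 2, 3, 4], [2, 4, 7, 9], [3, 6, 8, 11]])

# General solution from Step 3: x1 = -2s - t, x2 = s, x3 = -t, x4 = t.
x = sp.Matrix([-2*s - t, s, -t, t])

print(sp.simplify(A * x))   # Matrix([[0], [0], [0]]) for every s and t
```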
Step 4: Decompose the solution into a linear combination of vectors.
This is the magic step. We write out our solution vector `x` and separate the parts multiplied by `s` and `t`:

$$x = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} = \begin{bmatrix} -2s - t \\ s \\ -t \\ t \end{bmatrix} = s\begin{bmatrix} -2 \\ 1 \\ 0 \\ 0 \end{bmatrix} + t\begin{bmatrix} -1 \\ 0 \\ -1 \\ 1 \end{bmatrix}$$
Every vector `x` in the Null Space is a linear combination of these two specific vectors. These two vectors form a **basis for the Null Space of `A`**.
The dimension of the Null Space (called the **nullity**) is 2, which is exactly the number of free variables we had.
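SymPy can also hand us this basis directly: `nullspace()` returns one basis vector per free variable. A quick sketch with the same matrix:

```python
import sympy as sp

A = sp.Matrix([[1, 2, 3, 4], [2, 4, 7, 9], [3, 6, 8, 11]])

basis = A.nullspace()
for v in basis:
    print(v.T)                       # [-2, 1, 0, 0] and [-1, 0, -1, 1]
    assert A * v == sp.zeros(3, 1)   # each basis vector really solves Ax = 0

print(len(basis))                    # 2 -- the nullity, one per free variable
```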
The Meaning of the Null Space: Redundancy
The Null Space is the precise mathematical description of **redundancy** in the columns of `A`.
A non-zero vector `x` in the Null Space means that `Ax = 0`, which is `x₁·(col 1) + x₂·(col 2) + ... = 0`. This is the very definition of **linear dependence** among the columns of `A`. The basis vectors we found for the Null Space give us the exact "recipes" for how the columns are dependent on each other.
Example Decoded
For our example, `s = 1, t = 0` gives the null space vector `x = [-2, 1, 0, 0]`. This means:

`-2·(col 1) + 1·(col 2) + 0·(col 3) + 0·(col 4) = 0`

This tells us that `col 2 = 2·(col 1)`. The Null Space has revealed the specific redundancy between the columns.
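We can confirm that recipe numerically. A small sketch (SymPy's `col()` accessor is 0-indexed, so `col(0)` is column 1):

```python
import sympy as sp

A = sp.Matrix([[1, 2, 3, 4], [2, 4, 7, 9], [3, 6, 8, 11]])

x = sp.Matrix([-2, 1, 0, 0])       # the null space vector for s=1, t=0
print(A * x)                       # Matrix([[0], [0], [0]])
print(A.col(1) == 2 * A.col(0))    # True: col 2 is exactly twice col 1
```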
**Up Next:** We will complete the picture by introducing the **Row Space** and the **Left Null Space**, and see how all four fundamental subspaces fit together.