Calculation and Properties of the Determinant

The machinery for computing and reasoning about the scaling factor of space.

In our last lesson, we developed a deep geometric intuition for the determinant. We know it's the scaling factor of a transformation. Now, it's time to build the machinery to compute this magical number for any square matrix.

We'll start with the simple cases and build up to a general, powerful formula. Then, we'll uncover the properties that let us reason about determinants without always resorting to heavy computation.

The 2x2 Case: The Formula We Already Know

For a 2x2 matrix, the formula is simple and is the one we discovered geometrically.

A = \begin{bmatrix} a & b \\ c & d \end{bmatrix} \implies \det(A) = ad - bc

This formula represents the signed area of the parallelogram formed by the transformed basis vectors [a, c] and [b, d].

Example:

A = \begin{bmatrix} 3 & -1 \\ 2 & 4 \end{bmatrix}

\det(A) = (3)(4) - (-1)(2) = 12 - (-2) = 14. This transformation scales area by a factor of 14 and preserves orientation.
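
As a quick sanity check, here is a minimal Python sketch (assuming NumPy, which the lesson itself doesn't use; the helper name `det_2x2` is mine) that computes ad - bc directly and compares it to `np.linalg.det`:

```python
import numpy as np

def det_2x2(A):
    """Signed area scaling factor of a 2x2 matrix: ad - bc."""
    a, b = A[0, 0], A[0, 1]
    c, d = A[1, 0], A[1, 1]
    return a * d - b * c

A = np.array([[3.0, -1.0],
              [2.0,  4.0]])

print(det_2x2(A))        # 14.0
print(np.linalg.det(A))  # ~14.0, up to floating-point rounding
```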

The 3x3 Case: Introducing Cofactor Expansion
This method breaks down the determinant of an `n x n` matrix into a combination of determinants of smaller `(n-1) x (n-1)` matrices.

For a general 3x3 matrix A = \begin{bmatrix} a & b & c \\ d & e & f \\ g & h & i \end{bmatrix}, expanding along the first row gives:

\det(A) = a \cdot C_{11} + b \cdot C_{12} + c \cdot C_{13}

The C_{ij} terms are **cofactors**, where C_{ij} = (-1)^{i+j} \cdot \det(M_{ij}). M_{ij} is the **minor**, the smaller matrix left when you delete row `i` and column `j`. The (-1)^{i+j} term creates a checkerboard pattern of signs:

\begin{bmatrix} + & - & + \\ - & + & - \\ + & - & + \end{bmatrix}

Putting it all together:

\det(A) = a \begin{vmatrix} e & f \\ h & i \end{vmatrix} - b \begin{vmatrix} d & f \\ g & i \end{vmatrix} + c \begin{vmatrix} d & e \\ g & h \end{vmatrix}
= a(ei - fh) - b(di - fg) + c(dh - eg)
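
To make the recursion concrete, here is an illustrative Python sketch of cofactor expansion along the first row (a minimal sketch, not the lesson's own code; the name `det_cofactor` is mine and it assumes a square NumPy array):

```python
import numpy as np

def det_cofactor(A):
    """Determinant by cofactor expansion along the first row (teaching-only: very slow)."""
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    if n == 2:
        return A[0, 0] * A[1, 1] - A[0, 1] * A[1, 0]
    total = 0.0
    for j in range(n):
        # Minor M_1j: delete row 0 and column j
        minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)
        cofactor = (-1) ** j * det_cofactor(minor)  # (-1)^(1+j) sign from the checkerboard
        total += A[0, j] * cofactor
    return total

A = np.array([[2.0, 4.0, 6.0],
              [1.0, 3.0, 5.0],
              [1.0, 2.0, 4.0]])
print(det_cofactor(A))  # 2.0
```

Each call on an `n x n` matrix spawns `n` recursive calls on `(n-1) x (n-1)` minors, which is why this approach becomes impractical very quickly as `n` grows.
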
Key Properties of the Determinant

1. Identity Matrix: \det(I) = 1. (The "do-nothing" transformation has a scaling factor of 1).

2. Row Swaps: Swapping two rows flips the sign of the determinant.

3. Row Replacement: Adding a multiple of one row to another does not change the determinant. This is why Gaussian elimination is so useful!

4. Triangular Matrix: The determinant is the product of the diagonal entries.

5. Invertibility: A is invertible if and only if \det(A) \neq 0.

6. Multiplicative Property: \det(AB) = \det(A) \cdot \det(B). The scaling factor of composed transformations is the product of their individual scaling factors.

7. Inverse: \det(A^{-1}) = 1 / \det(A) (provided A is invertible).

8. Transpose: \det(A^T) = \det(A).
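
These properties are easy to spot-check numerically. A minimal sketch, again assuming NumPy; the random 3x3 matrices here are purely illustrative (and almost certainly invertible):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((3, 3))
B = rng.random((3, 3))

det = np.linalg.det

print(np.isclose(det(np.eye(3)), 1.0))                 # 1. det(I) = 1
print(np.isclose(det(A @ B), det(A) * det(B)))         # 6. det(AB) = det(A) det(B)
print(np.isclose(det(np.linalg.inv(A)), 1 / det(A)))   # 7. det(A^-1) = 1 / det(A)
print(np.isclose(det(A.T), det(A)))                    # 8. det(A^T) = det(A)

# 2. Swapping two rows flips the sign of the determinant
A_swapped = A[[1, 0, 2], :]
print(np.isclose(det(A_swapped), -det(A)))
```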

A Smarter Way to Calculate
Brute-force cofactor expansion is very slow for large matrices: the number of arithmetic operations grows factorially with the matrix size. A far more efficient way to calculate a determinant is to combine row operations with the triangular matrix property, which costs only on the order of n^3 operations.

To find \det(A) for A = \begin{bmatrix} 2 & 4 & 6 \\ 1 & 3 & 5 \\ 1 & 2 & 4 \end{bmatrix}:

  1. Use row operations to reduce A to row echelon form (REF), keeping track of how each operation affects the determinant.
    • Factor 2 from R_1: \det(A) = 2 \cdot \det(\begin{bmatrix} 1 & 2 & 3 \\ 1 & 3 & 5 \\ 1 & 2 & 4 \end{bmatrix})
    • R_2 \to R_2 - R_1 and R_3 \to R_3 - R_1 (no change to the determinant).
  2. The matrix becomes upper triangular: U = \begin{bmatrix} 1 & 2 & 3 \\ 0 & 1 & 2 \\ 0 & 0 & 1 \end{bmatrix}. Its determinant is the product of the diagonal entries: 1 \times 1 \times 1 = 1.
  3. The final determinant is \det(A) = 2 \times 1 = 2. This is far faster than a full cofactor expansion.
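
Here is a sketch of that strategy in code, under the same assumptions as the earlier snippets (NumPy; the name `det_by_elimination` is mine): reduce to an upper triangular matrix, flip the sign for each row swap, and multiply the diagonal.

```python
import numpy as np

def det_by_elimination(A):
    """Determinant via Gaussian elimination: reduce to upper triangular,
    track row swaps, then multiply the diagonal entries."""
    U = np.array(A, dtype=float)  # work on a copy
    n = U.shape[0]
    sign = 1.0
    for k in range(n):
        # Partial pivoting: bring the largest remaining pivot up (each swap flips the sign)
        p = k + np.argmax(np.abs(U[k:, k]))
        if np.isclose(U[p, k], 0.0):
            return 0.0                      # no usable pivot: the matrix is singular
        if p != k:
            U[[k, p]] = U[[p, k]]
            sign = -sign
        # Row replacement leaves the determinant unchanged
        for i in range(k + 1, n):
            U[i, k:] -= (U[i, k] / U[k, k]) * U[k, k:]
    return sign * np.prod(np.diag(U))

A = np.array([[2, 4, 6],
              [1, 3, 5],
              [1, 2, 4]])
print(det_by_elimination(A))  # 2.0
print(np.linalg.det(A))       # ~2.0
```

NumPy's own `np.linalg.det` works the same general way, computing the determinant from an LU factorization.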

**Up Next in Module 4:** Armed with the power of the determinant, we are ready to tackle the central topic of the second half of linear algebra: **Eigenvalues and Eigenvectors**.