Applications of Eigen-analysis
Markov Chains & Dynamic Systems
We've developed the complete theory of eigenvalues and eigenvectors ($A\mathbf{v} = \lambda\mathbf{v}$) and the powerful computational tool of diagonalization ($A = PDP^{-1}$).
Now, we put it to work. The true power of eigen-analysis is in understanding systems that evolve over time. These are called dynamic systems. We will explore one of the most famous and useful examples: Markov Chains.
What is a Dynamic System?
A dynamic system is any system that changes from one state to another over discrete time steps according to a fixed rule. If that rule is a linear transformation, we can model it with a matrix.
Let $\mathbf{x}_k$ be the state of a system at time step $k$. The state at the next time step, $\mathbf{x}_{k+1}$, is given by:

$$\mathbf{x}_{k+1} = A\mathbf{x}_k$$

- $\mathbf{x}_0$ is our initial state.
- $\mathbf{x}_1 = A\mathbf{x}_0$
- $\mathbf{x}_2 = A\mathbf{x}_1 = A(A\mathbf{x}_0) = A^2\mathbf{x}_0$
- In general: $\mathbf{x}_k = A^k\mathbf{x}_0$
The state of the system at any future time $k$ is determined by the $k$-th power of the transformation matrix $A$ applied to the initial state $\mathbf{x}_0$.
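As a minimal sketch, the iteration above can be computed by repeatedly applying the rule. The helper names (`step`, `evolve`) and the 2x2 restriction are illustrative choices, not from the text:

```python
def step(A, x):
    """One step of the dynamic system: returns A x for a 2x2 matrix A."""
    return [A[0][0] * x[0] + A[0][1] * x[1],
            A[1][0] * x[0] + A[1][1] * x[1]]

def evolve(A, x0, k):
    """Compute x_k = A^k x_0 by applying the rule k times."""
    x = x0
    for _ in range(k):
        x = step(A, x)
    return x
```

For example, with a diagonal matrix $A = \mathrm{diag}(1, 2)$, `evolve(A, [3.0, 1.0], 3)` scales the second component by $2^3 = 8$.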
Application: Markov Chains
A Markov Chain is a specific type of dynamic system used to model probabilities. It describes the movement of something between a finite number of states.
The Rules of a Markov Matrix (or Transition Matrix):
- All entries are between 0 and 1 (they represent probabilities).
- The entries in each column sum to 1 (the total probability of moving from a given state must be 100%).
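The two rules above are easy to check mechanically. This small validator (a sketch; the function name and tolerance are illustrative) tests both conditions for a square matrix stored as a list of rows:

```python
def is_markov(A, tol=1e-9):
    """Check the Markov-matrix rules: all entries in [0, 1], columns sum to 1."""
    n = len(A)
    # Rule 1: every entry is a valid probability
    for row in A:
        if any(not (0.0 <= v <= 1.0) for v in row):
            return False
    # Rule 2: each column sums to 1 (total outgoing probability is 100%)
    for j in range(n):
        if abs(sum(A[i][j] for i in range(n)) - 1.0) > tol:
            return False
    return True
```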
Example: A Simple Market Model
Imagine a city where people choose between two streaming services, "StreamFlix" and "ConnectPlus." Each month, some people switch.
- StreamFlix keeps 90% of its customers, and 10% switch to ConnectPlus.
- ConnectPlus keeps 80% of its customers, and 20% switch to StreamFlix.
We model this with a transition matrix $A$ (first column: StreamFlix, second column: ConnectPlus):

$$A = \begin{bmatrix} 0.9 & 0.2 \\ 0.1 & 0.8 \end{bmatrix}$$

Let $\mathbf{x}_0 = [0.6, 0.4]^T$ be the initial market share. After one month:

$$\mathbf{x}_1 = A\mathbf{x}_0 = \begin{bmatrix} 0.9 & 0.2 \\ 0.1 & 0.8 \end{bmatrix}\begin{bmatrix} 0.6 \\ 0.4 \end{bmatrix} = \begin{bmatrix} 0.62 \\ 0.38 \end{bmatrix}$$

StreamFlix will have 62% of the market and ConnectPlus will have 38%.
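The one-month update can be reproduced in a few lines. This sketch hardcodes the 2x2 multiply; the helper name `apply_transition` is an illustrative choice:

```python
# Transition matrix from the market model (columns: StreamFlix, ConnectPlus)
A = [[0.9, 0.2],
     [0.1, 0.8]]

def apply_transition(A, x):
    """Multiply the 2x2 transition matrix A by the state vector x."""
    return [A[0][0] * x[0] + A[0][1] * x[1],
            A[1][0] * x[0] + A[1][1] * x[1]]

x0 = [0.6, 0.4]                       # initial market shares
x1 = apply_transition(A, x0)
print([round(v, 2) for v in x1])      # [0.62, 0.38]
```

Note that the components of `x1` still sum to 1, as they must for a probability vector.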
The Big Question: What Happens in the Long Run?
The **steady state** of this system is the eigenvector corresponding to the eigenvalue $\lambda = 1$. (Every Markov matrix has $\lambda = 1$ as an eigenvalue; this follows from the fact that its columns sum to 1.)
We find the eigenvector for $\lambda = 1$ by solving $(A - I)\mathbf{v} = \mathbf{0}$:

$$A - I = \begin{bmatrix} 0.9 - 1 & 0.2 \\ 0.1 & 0.8 - 1 \end{bmatrix} = \begin{bmatrix} -0.1 & 0.2 \\ 0.1 & -0.2 \end{bmatrix}$$

Writing $\mathbf{v} = [s, c]^T$ (the shares of StreamFlix and ConnectPlus), the first row gives the equation $-0.1s + 0.2c = 0$, which simplifies to $s = 2c$. The eigenvector has the form $[2c, c]^T$. Choosing $c = 1$ gives the basis vector $\mathbf{v}_1 = [2, 1]^T$.
To make this a probability vector (components sum to 1), we normalize it:

$$\mathbf{x}_{ss} = \begin{bmatrix} 2/3 \\ 1/3 \end{bmatrix} \approx \begin{bmatrix} 0.667 \\ 0.333 \end{bmatrix}$$

**Conclusion:** The market will stabilize with StreamFlix holding 2/3 of the market and ConnectPlus holding 1/3, regardless of the initial state.
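The "regardless of the initial state" claim can be checked numerically by iterating the chain from two very different starting distributions (a sketch; the function name `iterate` is illustrative):

```python
A = [[0.9, 0.2],
     [0.1, 0.8]]

def iterate(A, x, n):
    """Apply the 2x2 transition matrix A to the state x, n times."""
    for _ in range(n):
        x = [A[0][0] * x[0] + A[0][1] * x[1],
             A[1][0] * x[0] + A[1][1] * x[1]]
    return x

# Two opposite starting distributions converge to the same limit, [2/3, 1/3]
print([round(v, 3) for v in iterate(A, [1.0, 0.0], 200)])   # [0.667, 0.333]
print([round(v, 3) for v in iterate(A, [0.0, 1.0], 200)])   # [0.667, 0.333]
```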
Summary: Eigen-analysis in Action
- Dynamic systems $\mathbf{x}_{k+1} = A\mathbf{x}_k$ can be understood by analyzing the matrix $A$.
- Diagonalization ($A^k = PD^kP^{-1}$) is key to computing long-term states.
- Markov Chains model probabilistic state changes.
- The steady state of a Markov chain is the eigenvector for the eigenvalue $\lambda = 1$.
- Other eigenvalues, with magnitude less than 1, determine the rate of convergence to the steady state.
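The diagonalization bullet can be illustrated with the market matrix. Its eigenvalues are $1$ and $0.7$ with eigenvectors $[2, 1]^T$ and $[1, -1]^T$; the second eigenpair is computed here, not stated in the text, so treat this as a hedged sketch:

```python
def matmul2(X, Y):
    """Multiply two 2x2 matrices."""
    return [[X[0][0]*Y[0][0] + X[0][1]*Y[1][0], X[0][0]*Y[0][1] + X[0][1]*Y[1][1]],
            [X[1][0]*Y[0][0] + X[1][1]*Y[1][0], X[1][0]*Y[0][1] + X[1][1]*Y[1][1]]]

# Eigen-decomposition of the market matrix: eigenvalues 1 and 0.7,
# eigenvectors [2, 1] and [1, -1] (P's columns); P_inv computed by hand
P = [[2.0, 1.0], [1.0, -1.0]]
P_inv = [[1/3, 1/3], [1/3, -2/3]]

def markov_power(k):
    """A^k = P D^k P^{-1}: only the diagonal entries need raising to k."""
    Dk = [[1.0**k, 0.0], [0.0, 0.7**k]]
    return matmul2(matmul2(P, Dk), P_inv)
```

For large $k$ the $0.7^k$ term vanishes, so `markov_power(k)` approaches a matrix whose columns are both the steady state $[2/3, 1/3]^T$; this is exactly why the smaller eigenvalue controls the convergence rate.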
**Up Next:** We will explore the special case of **Symmetric Matrices** and the **Spectral Theorem**.