So far in this module, our analysis has been largely static. We've looked at a snapshot of the market to calculate risk, optimize portfolios, and find relationships. But markets are not static; they are dynamic, constantly in motion.
How do we model systems that change over time? How do we predict the future state of a system based on its current state?
For this, we turn to a wonderfully elegant tool called a Markov Chain. It is a perfect marriage of probability theory and linear algebra, and it allows us to model everything from the weather to a customer's journey to the migration of credit ratings.
Part 1: The Core Idea - Memoryless State Transitions
A Markov Chain describes a system that can be in one of several states. The system transitions between these states in discrete time steps (e.g., every day, month, or year).
The defining feature—the "Markov Property"—is that the probability of transitioning to any future state depends only on the current state, not on the sequence of events that preceded it. The system is "memoryless."
The Perfect Financial Example: Credit Rating Migrations
Let's model the credit quality of a universe of companies. We can define a few simple states a company can be in:
State 1: Investment Grade (IG)
State 2: High Yield (HY)
State 3: Defaulted (D)
Each year, a company can either stay in its current state or transition to another. We can represent the probabilities of these transitions in a grid.
| From | To: IG | To: HY | To: D |
|------|--------|--------|-------|
| IG   | 90%    | 8%     | 2%    |
| HY   | 5%     | 85%    | 10%   |
| D    | 0%     | 0%     | 100%  |
This grid tells us, for example, that an Investment Grade company has a 90% chance of staying IG next year, an 8% chance of being downgraded to High Yield, and a 2% chance of defaulting.
Notice that a Defaulted company stays defaulted with 100% probability. This is called an absorbing state.
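To make this concrete, here is a minimal simulation sketch in Python with NumPy (the function name `simulate_path` and the random seed are illustrative choices, not part of the model). Note how each year's draw uses only the current state's row of probabilities; that is the Markov property in code.

```python
import numpy as np

rng = np.random.default_rng(seed=42)  # seeded so the sketch is reproducible

STATES = ["IG", "HY", "D"]  # Investment Grade, High Yield, Defaulted

# Annual transition probabilities from the grid above (rows are "from" states).
P = np.array([
    [0.90, 0.08, 0.02],  # from IG
    [0.05, 0.85, 0.10],  # from HY
    [0.00, 0.00, 1.00],  # from D: the absorbing state
])

def simulate_path(start_state: int, n_years: int) -> list[str]:
    """Sample one company's rating path, one year at a time."""
    state = start_state
    path = [STATES[state]]
    for _ in range(n_years):
        # Memoryless: the next state is drawn using only the row for
        # the current state, never the earlier history of the path.
        state = rng.choice(len(STATES), p=P[state])
        path.append(STATES[state])
    return path

print(simulate_path(start_state=0, n_years=10))
```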
Part 2: The Linear Algebra Connection - The Transition Matrix
We represent this grid of probabilities as a matrix, the Transition Matrix `P`.
$$P = \begin{bmatrix} 0.90 & 0.08 & 0.02 \\ 0.05 & 0.85 & 0.10 \\ 0.00 & 0.00 & 1.00 \end{bmatrix}$$
This is a special kind of matrix called a **stochastic matrix**. Its two key properties are:
All entries are non-negative.
The sum of each row is exactly 1.
Now, let's represent the current state of our entire market with a **state vector `v`**. Let's say today, 80% of companies are IG, 15% are HY, and 5% have Defaulted.
$$v_0 = [0.80, 0.15, 0.05]$$
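As a quick sanity check, here is a small sketch in Python with NumPy (variable names `P` and `v0` chosen to mirror the text) that encodes the matrix, asserts both stochastic-matrix properties, and sets up the state vector:

```python
import numpy as np

# Transition matrix: rows are "from" states (IG, HY, D), columns are "to" states.
P = np.array([
    [0.90, 0.08, 0.02],
    [0.05, 0.85, 0.10],
    [0.00, 0.00, 1.00],
])

# The two defining properties of a stochastic matrix.
assert (P >= 0).all(), "all entries must be non-negative"
assert np.allclose(P.sum(axis=1), 1.0), "each row must sum to exactly 1"

# Today's state vector: 80% IG, 15% HY, 5% Defaulted.
v0 = np.array([0.80, 0.15, 0.05])
```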
Part 3: Predicting the Future - Matrix-Vector Multiplication
To find the state one year from now, we multiply our current state vector (treated as a row vector) by our transition matrix: `v₁ = v₀ * P`.
Let's calculate the first component of `v₁` (the new % of IG firms):

$$0.80 \times 0.90 + 0.15 \times 0.05 + 0.05 \times 0.00 = 0.72 + 0.0075 + 0 = 0.7275$$
This calculation tells a story: 90% of the original 80% of IG firms *stayed* IG (0.72), and 5% of the original 15% of HY firms were *upgraded* to IG (0.0075).
When we complete the full matrix-vector multiplication, we get our new state vector:
$$v_1 = [0.7275, 0.1915, 0.0810]$$
After one year, we predict that 72.75% of firms will be IG, 19.15% will be HY, and 8.1% will be Defaulted.
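The same one-step forecast as a sketch in NumPy, treating `v₀` as a row vector (`@` is NumPy's matrix-multiplication operator):

```python
import numpy as np

P = np.array([[0.90, 0.08, 0.02],
              [0.05, 0.85, 0.10],
              [0.00, 0.00, 1.00]])
v0 = np.array([0.80, 0.15, 0.05])

# One step of the chain: a row vector times the transition matrix.
v1 = v0 @ P
print(v1)  # approximately [0.7275, 0.1915, 0.0810]
```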
Part 4: The Long-Term Future - Matrix Powers
What about two years? `v₂ = v₁ * P = (v₀ * P) * P = v₀ * P²`. The state after `n` steps is:
$$v_n = v_0 P^n$$
The problem of predicting the long-term future has become the problem of calculating matrix powers.
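A sketch of the n-step forecast using `numpy.linalg.matrix_power` (the forecast horizons printed here are arbitrary choices for illustration):

```python
import numpy as np
from numpy.linalg import matrix_power

P = np.array([[0.90, 0.08, 0.02],
              [0.05, 0.85, 0.10],
              [0.00, 0.00, 1.00]])
v0 = np.array([0.80, 0.15, 0.05])

# State after n steps: v_n = v0 * P^n.
for n in (1, 5, 25, 100):
    vn = v0 @ matrix_power(P, n)
    print(n, np.round(vn, 4))
# As n grows, mass drains steadily into the absorbing Default state.
```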
Part 5: The Steady State - The Eigenvector Connection
For many transition matrices, the system will converge to a **steady-state vector `v_ss`**, a distribution that further transitions leave unchanged:
$$v_{ss} P = v_{ss}$$
This is the eigenvector equation `Ax = λx`, written from the left! The steady-state vector is the **left eigenvector** of `P` corresponding to an **eigenvalue of λ = 1**.
For any regular stochastic matrix, an eigenvalue of 1 is guaranteed to exist, and its corresponding left eigenvector (normalized so its entries sum to 1) is the unique long-term equilibrium state of the system.
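Numerically, one way to recover this is to ask for the eigenvectors of `Pᵀ`, since right eigenvectors of the transpose are left eigenvectors of `P`. One caveat for our credit example: because Default is absorbing, this particular matrix is not regular, and its long-run state is full absorption into Default, `[0, 0, 1]`. A minimal sketch:

```python
import numpy as np

P = np.array([[0.90, 0.08, 0.02],
              [0.05, 0.85, 0.10],
              [0.00, 0.00, 1.00]])

# Left eigenvectors of P are right eigenvectors of P transposed.
eigenvalues, eigenvectors = np.linalg.eig(P.T)

# Select the eigenvector whose eigenvalue is (numerically) 1.
idx = np.argmin(np.abs(eigenvalues - 1.0))
v_ss = np.real(eigenvectors[:, idx])
v_ss = v_ss / v_ss.sum()  # normalize so the probabilities sum to 1

print(v_ss)  # [0. 0. 1.] for this chain: everything is eventually absorbed
```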