Lesson 2.3: ARMA Models

Combining AR and MA models to capture complex dynamics.

Part 1: The Core Idea - A More Complete Memory

Real-world processes are rarely pure AR or pure MA. An Autoregressive Moving Average (ARMA) model provides a parsimonious way to model a series that has both types of memory.

The Core Analogy: A Smart Thermostat

An ARMA model is like a smart thermostat that uses both the history of the room's temperature (the AR part) and the history of its own past forecast errors (the MA part, e.g., remembering someone opened a window) to make a more intelligent prediction.

Part 2: The ARMA(p,q) Model Specification

The ARMA(p,q) Model

An ARMA(p,q) model combines an AR(p) component and an MA(q) component.

Y_t = c + \underbrace{\phi_1 Y_{t-1} + \dots + \phi_p Y_{t-p}}_{\text{AR(p)}} + \underbrace{\theta_1 \epsilon_{t-1} + \dots + \theta_q \epsilon_{t-q}}_{\text{MA(q)}} + \epsilon_t

where the \phi_i are the AR coefficients, the \theta_j are the MA coefficients, and \epsilon_t is white noise.
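To see the equation in action, here is a minimal numpy-only sketch that simulates an ARMA(1,1) process directly from the recursion above. The function name `simulate_arma11` and the parameter values are illustrative choices, not part of the lesson.

```python
import numpy as np

def simulate_arma11(phi, theta, c=0.0, n=500, seed=0):
    """Simulate an ARMA(1,1) process:
    Y_t = c + phi * Y_{t-1} + theta * eps_{t-1} + eps_t
    (started from zero, so the first few observations are burn-in)."""
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal(n)      # white-noise shocks
    y = np.zeros(n)
    for t in range(1, n):
        y[t] = c + phi * y[t - 1] + theta * eps[t - 1] + eps[t]
    return y

# One draw: AR memory (phi=0.6) plus shock memory (theta=0.4)
y = simulate_arma11(phi=0.6, theta=0.4)
```

Each new value depends on both the previous value (the AR part) and the previous shock (the MA part), which is exactly the "smart thermostat" intuition from Part 1.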

Part 3: Model Identification

The Signature of an ARMA(p,q) Process

A mixed ARMA process has an ambiguous signature. Unlike a pure AR(p) process (whose PACF cuts off after lag p) or a pure MA(q) process (whose ACF cuts off after lag q), neither plot cuts off sharply:

  • The **ACF plot** will **decay gradually**.
  • The **PACF plot** will also **decay gradually**.
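The gradual ACF decay can be checked numerically. The sketch below (numpy-only; the helper name `sample_acf` is a hypothetical choice) simulates an ARMA(1,1) series and computes its sample autocorrelations, which shrink toward zero rather than cutting off at a fixed lag.

```python
import numpy as np

def sample_acf(y, nlags):
    """Sample autocorrelation function for lags 0..nlags."""
    y = y - y.mean()
    denom = np.dot(y, y)
    return np.array([np.dot(y[: len(y) - k], y[k:]) / denom
                     for k in range(nlags + 1)])

# Simulate an ARMA(1,1): Y_t = 0.6 Y_{t-1} + 0.4 eps_{t-1} + eps_t
rng = np.random.default_rng(1)
n = 2000
eps = rng.standard_normal(n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.6 * y[t - 1] + 0.4 * eps[t - 1] + eps[t]

r = sample_acf(y, 10)
# r[1], r[2], r[3], ... shrink geometrically instead of dropping to
# zero after some lag q -- the gradual decay described above.
```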

Solving Ambiguity: Information Criteria (AIC & BIC)

When visual inspection isn't enough, we fit multiple candidate models (e.g., ARMA(1,1), ARMA(2,1)) and choose the one with the **lowest AIC or BIC value**.
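The selection procedure can be sketched without any specialized library. The example below is a numpy-only illustration that fits candidate AR(p) models by least squares and compares them with the Gaussian AIC (up to an additive constant); in practice a time-series library would fit full ARMA(p,q) candidates the same way, picking the lowest AIC. The helper `fit_ar_aic` is a hypothetical name for this sketch.

```python
import numpy as np

def fit_ar_aic(y, p):
    """Fit AR(p) with intercept by least squares and return its AIC
    (Gaussian likelihood, dropping an additive constant):
    AIC = n * log(RSS / n) + 2 * k."""
    n = len(y) - p
    lags = np.column_stack([y[p - k - 1 : p - k - 1 + n] for k in range(p)])
    X = np.column_stack([np.ones(n), lags])
    target = y[p:]
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    rss = np.sum((target - X @ beta) ** 2)
    k = p + 1                       # p lag coefficients + intercept
    return n * np.log(rss / n) + 2 * k

# Simulate AR(2) data: Y_t = 0.5 Y_{t-1} + 0.3 Y_{t-2} + eps_t
rng = np.random.default_rng(2)
n = 1000
eps = rng.standard_normal(n)
y = np.zeros(n)
for t in range(2, n):
    y[t] = 0.5 * y[t - 1] + 0.3 * y[t - 2] + eps[t]

# Fit several candidates and keep the one with the lowest AIC
aics = {p: fit_ar_aic(y, p) for p in (1, 2, 3)}
best = min(aics, key=aics.get)
```

The penalty term 2k is what makes AIC prefer a parsimonious model: a larger model is chosen only if its fit improvement outweighs the cost of the extra parameters.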

What's Next? The Final Piece

We have a powerful toolkit for modeling stationary time series. But what about non-stationary data like stock prices?

In the next lesson, we will learn how to handle non-stationary data by introducing **differencing**, completing our journey to the **ARIMA Model**.