Lesson 2.1: Autoregressive (AR) Models

Modeling how past values of a series influence its present value.

Part 1: The Core Idea - Regressing on the Past

An Autoregressive (AR) model is a regression of a time series on its own past values, called **lags**. It formalizes the idea of "memory" or momentum.

The Core Analogy: Driving by Looking in the Rear-View Mirror

An AR(1) model is like saying, "My speed right now ($Y_t$) is some fraction ($\phi_1$) of my speed one second ago ($Y_{t-1}$), plus a random jolt ($\epsilon_t$)." The order `p` in AR(p) tells you how many lags are included.
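The rear-view-mirror analogy can be made concrete by simulating an AR(1) series. This is a minimal numpy sketch (not from the lesson); the function name `simulate_ar1` and the parameter values are illustrative, and the comment about $|\phi_1| < 1$ reflects the standard stationarity condition for AR(1):

```python
import numpy as np

rng = np.random.default_rng(seed=42)

def simulate_ar1(phi1, n, sigma=1.0):
    """Generate n observations from a zero-mean AR(1): Y_t = phi1 * Y_{t-1} + eps_t.

    Assumes |phi1| < 1 so the process is stationary.
    """
    eps = rng.normal(scale=sigma, size=n)  # the random "jolts"
    y = np.zeros(n)
    for t in range(1, n):
        # today's value = a fraction of yesterday's value + a random jolt
        y[t] = phi1 * y[t - 1] + eps[t]
    return y

y = simulate_ar1(phi1=0.8, n=500)
print(y[:5])
```

With $\phi_1 = 0.8$ the series has strong momentum: large values tend to be followed by large values, but the random jolts keep pulling it back toward its mean.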

Part 2: The AR(p) Model Specification

The AR(p) Model

The value of the series $Y_t$ is a linear function of its own $p$ past values.

$$Y_t = c + \phi_1 Y_{t-1} + \phi_2 Y_{t-2} + \dots + \phi_p Y_{t-p} + \epsilon_t$$
  • $c$ is a constant (intercept) term.
  • $\phi_i$ are the autoregressive coefficients.
  • $\epsilon_t$ is a white noise error term.

Part 3: Model Identification

The Signature of an AR(p) Process

  • The **ACF plot** will show a pattern of **gradual decay**.
  • The **PACF plot** will **cut off sharply** after lag $p$.

The PACF plot is the primary tool for identifying the order `p` of an AR model.
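The AR signature can be checked numerically. The sketch below (an illustration, not the lesson's code) computes the sample ACF directly and the sample PACF via the standard Durbin-Levinson recursion, for a simulated AR(1): the ACF should decay roughly geometrically, while the PACF should be large at lag 1 and near zero after it.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate an AR(1) with phi_1 = 0.7
n = 2000
y = np.zeros(n)
eps = rng.normal(size=n)
for t in range(1, n):
    y[t] = 0.7 * y[t - 1] + eps[t]

def acf(x, nlags):
    """Sample autocorrelations rho_0..rho_nlags."""
    x = x - x.mean()
    denom = np.dot(x, x)
    return np.array([1.0] + [np.dot(x[:-k], x[k:]) / denom
                             for k in range(1, nlags + 1)])

def pacf(x, nlags):
    """Partial autocorrelations via the Durbin-Levinson recursion."""
    rho = acf(x, nlags)
    pac = [1.0]
    phi_prev = np.array([])
    for k in range(1, nlags + 1):
        if k == 1:
            phi_k = np.array([rho[1]])
        else:
            num = rho[k] - np.dot(phi_prev, rho[k - 1 : 0 : -1])
            den = 1.0 - np.dot(phi_prev, rho[1:k])
            a = num / den
            # update all coefficients of the order-k AR fit
            phi_k = np.append(phi_prev - a * phi_prev[::-1], a)
        pac.append(phi_k[-1])  # PACF at lag k = last coefficient
        phi_prev = phi_k
    return np.array(pac)

r = acf(y, 5)
pr = pacf(y, 5)
print("ACF :", np.round(r, 2))   # gradual decay, roughly 0.7**k
print("PACF:", np.round(pr, 2))  # spike at lag 1, near zero afterward
```

In practice you would read these off plots from `statsmodels.graphics.tsaplots.plot_acf` and `plot_pacf`, but the pattern is the same: gradual ACF decay plus a sharp PACF cutoff after lag 1 points to AR(1).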

What's Next? Modeling Shocks

AR models capture the memory of past *values*. But what about the memory of past *shocks* or forecast errors?

In the next lesson, we will explore the complementary **Moving Average (MA) Model**, which models the present value as a function of past error terms.