Lesson 2.1: Autoregressive (AR) Models
Modeling how past values of a series influence its present value.
Part 1: The Core Idea - Regressing on the Past
An Autoregressive (AR) model is a regression of a time series on its own past values, called **lags**. It formalizes the idea of "memory" or momentum.
The Core Analogy: Driving by Looking in the Rear-View Mirror
An AR(1) model is like saying, "My speed right now ($y_t$) is some fraction ($\phi_1$) of my speed one second ago ($y_{t-1}$), plus a random jolt ($\varepsilon_t$)." The order `p` in AR(p) tells you how many lags are included.
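To make the analogy concrete, here is a minimal sketch of simulating an AR(1) series in Python. It assumes numpy is available; the specific values of `phi` and `n_obs` are illustrative choices, not part of the lesson.

```python
import numpy as np

# Simulate an AR(1) process: y_t = phi * y_{t-1} + e_t
# (phi = 0.8 and n_obs = 200 are illustrative values, not from the lesson)
rng = np.random.default_rng(42)
phi = 0.8                          # fraction of "yesterday's speed" that carries over
n_obs = 200
noise = rng.normal(0, 1, n_obs)    # the random "jolts" (white noise)

y = np.zeros(n_obs)
for t in range(1, n_obs):
    y[t] = phi * y[t - 1] + noise[t]
```

With `phi` close to 1 the series drifts slowly and looks persistent; with `phi` near 0 it looks almost like pure noise, which is exactly the "memory" the AR coefficient controls.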
Part 2: The AR(p) Model Specification
The AR(p) Model
The value of the series is a linear function of its own past values:

$$y_t = \phi_1 y_{t-1} + \phi_2 y_{t-2} + \dots + \phi_p y_{t-p} + \varepsilon_t$$

- $\phi_1, \dots, \phi_p$ are the autoregressive coefficients.
- $\varepsilon_t$ is a white noise error term.
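As a sketch of how such a model is estimated in practice, the snippet below fits an AR model to the simulated series `y` from Part 1 using statsmodels' `AutoReg`. The order 2 is an arbitrary illustration; in practice the order comes from the PACF plot discussed in Part 3.

```python
from statsmodels.tsa.ar_model import AutoReg

# Fit an AR(2) model to the simulated series `y` from the earlier sketch.
# lags=2 is an arbitrary illustration of choosing p; it is not prescribed by the lesson.
results = AutoReg(y, lags=2).fit()

print(results.params)     # constant term plus the estimated phi_1 and phi_2
print(results.summary())  # coefficient estimates, standard errors, information criteria
```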
Part 3: Model Identification
The Signature of an AR(p) Process
- The **ACF plot** will show a pattern of **gradual decay**.
- The **PACF plot** will **cut off sharply** after lag `p`.
The PACF plot is the primary tool for identifying the order `p` of an AR model.
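A minimal sketch of checking this signature, assuming matplotlib and statsmodels are installed and reusing the simulated series `y` from Part 1:

```python
import matplotlib.pyplot as plt
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf

# For an AR(p) process, the ACF decays gradually while the PACF cuts off
# after lag p, so the last significant PACF spike suggests the order.
fig, axes = plt.subplots(2, 1, figsize=(8, 6))
plot_acf(y, lags=20, ax=axes[0])
plot_pacf(y, lags=20, ax=axes[1])
plt.tight_layout()
plt.show()
```

For the AR(1) series simulated earlier, the PACF should show a single significant spike at lag 1, which is exactly the cut-off pattern used to pick `p`.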
What's Next? Modeling Shocks
AR models capture the memory of past *values*. But what about the memory of past *shocks* or forecast errors?
In the next lesson, we will explore the complementary **Moving Average (MA) Model**, which models the present value as a function of past error terms.