Lesson 1.11: Masterclass on Continuous MGFs
This lesson completes our single-variable toolkit by applying the Moment Generating Function (MGF) to the continuous domain. We provide full analytical derivations for the MGFs of the Uniform, Exponential, and the all-important Normal distribution. We will then use these 'fingerprints' to generate their key moments, laying the rigorous mathematical bedrock for the Central Limit Theorem and advanced risk modeling.
Part 1: Upgrading the MGF for the Continuous World
The definition of the MGF is universal: $M_X(t) = E[e^{tX}]$. The only thing that changes is the tool we use to calculate the expectation. We swap summation for integration.
Definition: MGF for Continuous R.V.s
For a continuous random variable $X$ with PDF $f_X(x)$, the MGF is:

$$M_X(t) = E[e^{tX}] = \int_{-\infty}^{\infty} e^{tx} f_X(x)\,dx$$
The Moment Generation Rule
The magic of the MGF is that once the (often difficult) integration is done, finding moments becomes a simple act of differentiation. The core property is unchanged:

$$E[X^n] = \frac{d^n}{dt^n} M_X(t)\,\Big|_{t=0}$$
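Before any integrals, the rule itself can be illustrated numerically. The sketch below (stdlib only; not part of the derivation) approximates derivatives of an MGF at $t = 0$ with central finite differences. We use the standard normal MGF $e^{t^2/2}$, which is derived later in this lesson, and recover its first two moments:

```python
import math

def mgf_std_normal(t):
    # MGF of the standard normal (derived later in this lesson): M(t) = exp(t^2 / 2)
    return math.exp(t * t / 2)

def nth_moment(mgf, n, h=1e-3):
    # Central finite-difference approximation of the n-th derivative at t = 0.
    # By the moment generation rule, this approximates E[X^n].
    return sum(
        (-1) ** k * math.comb(n, k) * mgf((n / 2 - k) * h)
        for k in range(n + 1)
    ) / h ** n

mean = nth_moment(mgf_std_normal, 1)    # E[Z]   -> 0
second = nth_moment(mgf_std_normal, 2)  # E[Z^2] -> 1
```

The step size `h` trades truncation error against floating-point cancellation; `1e-3` is a reasonable middle ground for low-order moments.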
Part 2: Rigorous Derivations of Foundational MGFs
Uniform MGF Derivation
For $X \sim \text{Uniform}(a, b)$, the PDF is $f_X(x) = \frac{1}{b-a}$ for $a \le x \le b$:

$$M_X(t) = \int_a^b e^{tx} \frac{1}{b-a}\,dx = \frac{1}{b-a}\left[\frac{e^{tx}}{t}\right]_a^b = \frac{e^{tb} - e^{ta}}{t(b-a)}, \quad t \neq 0$$

(At $t = 0$, $M_X(0) = 1$, as for every MGF.)
Uniform Moment Derivations
Finding derivatives of this MGF at $t = 0$ requires L'Hôpital's rule, because the formula is an indeterminate $0/0$ form there. It's complex, so we simply state the moments, which were derived more easily in Lesson 1.9.
Mean: $E[X] = \frac{a+b}{2}$
Variance: $\text{Var}(X) = \frac{(b-a)^2}{12}$
Skewness: $0$ (the distribution is perfectly symmetric).
Excess Kurtosis: $-\frac{6}{5}$ (it has "thinner" tails than a Normal distribution).
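Despite the $0/0$ form, the singularity at $t = 0$ is removable, so finite differences sidestep L'Hôpital entirely. The sketch below (hypothetical values $a = 2$, $b = 5$) differentiates the Uniform MGF numerically at zero and recovers the stated mean and variance:

```python
import math

def mgf_uniform(t, a=2.0, b=5.0):
    # MGF of Uniform(a, b); t = 0 is a removable singularity where M(0) = 1
    if t == 0:
        return 1.0
    return (math.exp(t * b) - math.exp(t * a)) / (t * (b - a))

h = 1e-4
# First derivative at 0 -> E[X] = (a + b) / 2 = 3.5
mean = (mgf_uniform(h) - mgf_uniform(-h)) / (2 * h)
# Second derivative at 0 -> E[X^2]; subtract mean^2 for Var(X) = (b - a)^2 / 12 = 0.75
second = (mgf_uniform(h) - 2 * mgf_uniform(0) + mgf_uniform(-h)) / h ** 2
var = second - mean ** 2
```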
Exponential MGF Derivation
For $X \sim \text{Exponential}(\lambda)$, the PDF is $f_X(x) = \lambda e^{-\lambda x}$ for $x \ge 0$:

$$M_X(t) = \int_0^{\infty} e^{tx} \lambda e^{-\lambda x}\,dx = \lambda \int_0^{\infty} e^{(t-\lambda)x}\,dx = \lambda \left[\frac{e^{(t-\lambda)x}}{t-\lambda}\right]_0^{\infty} = \frac{\lambda}{\lambda - t}$$

For this integral to converge, the exponent must be negative, so we require $t < \lambda$.
Exponential Moment Derivations
Let's find the first two moments by differentiation. We'll write $M_X(t) = \lambda(\lambda - t)^{-1}$.
Mean: $M_X'(t) = \lambda(\lambda - t)^{-2}$, so $E[X] = M_X'(0) = \frac{\lambda}{\lambda^2} = \frac{1}{\lambda}$.
Variance: $M_X''(t) = 2\lambda(\lambda - t)^{-3}$, so $E[X^2] = M_X''(0) = \frac{2}{\lambda^2}$ and $\text{Var}(X) = \frac{2}{\lambda^2} - \frac{1}{\lambda^2} = \frac{1}{\lambda^2}$.
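These closed-form results can also be sanity-checked by simulation. The sketch below (stdlib only, with a hypothetical rate $\lambda = 2$) draws Exponential samples and compares the sample mean and variance against $1/\lambda$ and $1/\lambda^2$:

```python
import random

random.seed(42)
lam = 2.0
n = 200_000

# random.expovariate draws from Exponential(lam), which has mean 1/lam
samples = [random.expovariate(lam) for _ in range(n)]

mean = sum(samples) / n                          # close to 1/lam   = 0.5
var = sum((x - mean) ** 2 for x in samples) / n  # close to 1/lam^2 = 0.25
```

With 200,000 draws, Monte Carlo error on both estimates is on the order of a few thousandths.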
Normal MGF Derivation (No-Skip Masterclass)
We start with the MGF of a standard normal $Z \sim N(0, 1)$. Its PDF is $\phi(z) = \frac{1}{\sqrt{2\pi}} e^{-z^2/2}$.
Step 1: Combine the exponents.

$$M_Z(t) = \int_{-\infty}^{\infty} e^{tz} \frac{1}{\sqrt{2\pi}} e^{-z^2/2}\,dz = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{tz - z^2/2}\,dz$$
Step 2: The "Completing the Square" Trick. We want to make the exponent look like $-\frac{1}{2}(z - t)^2$. To do this, we add and subtract $t^2$ inside the parenthesis:

$$tz - \frac{z^2}{2} = -\frac{1}{2}\left(z^2 - 2tz\right) = -\frac{1}{2}\left(z^2 - 2tz + t^2 - t^2\right) = -\frac{1}{2}(z - t)^2 + \frac{t^2}{2}$$
Step 3: Separate the exponents and factor out the constant. The term $e^{t^2/2}$ is a constant with respect to $z$:

$$M_Z(t) = e^{t^2/2} \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}} e^{-(z-t)^2/2}\,dz$$
Step 4: Recognize the remaining integral. The entire integral that remains is the PDF of a Normal distribution with mean $t$ and variance 1. The total area under any PDF is exactly 1, so:

$$M_Z(t) = e^{t^2/2}$$
Step 5: Generalize to $X \sim N(\mu, \sigma^2)$. We use the property $X = \mu + \sigma Z$ and $M_{a + bZ}(t) = e^{at} M_Z(bt)$:

$$M_X(t) = e^{\mu t} M_Z(\sigma t) = e^{\mu t + \sigma^2 t^2 / 2}$$
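The closed form can be checked directly against the defining integral. The sketch below (hypothetical parameters $\mu = 1$, $\sigma = 2$) approximates $E[e^{tX}]$ with a trapezoidal sum over a wide grid and compares it with $e^{\mu t + \sigma^2 t^2/2}$:

```python
import math

def normal_pdf(x, mu, sigma):
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def mgf_numeric(t, mu=1.0, sigma=2.0, lo=-30.0, hi=30.0, n=20_000):
    # Trapezoidal approximation of the defining integral E[e^{tX}].
    # The [-30, 30] window covers many standard deviations, so tail truncation
    # is negligible for small t.
    dx = (hi - lo) / n
    total = 0.0
    for i in range(n + 1):
        x = lo + i * dx
        w = 0.5 if i in (0, n) else 1.0
        total += w * math.exp(t * x) * normal_pdf(x, mu, sigma)
    return total * dx

t = 0.3
closed_form = math.exp(1.0 * t + (2.0 ** 2) * t ** 2 / 2)
approx = mgf_numeric(t)  # matches closed_form to several decimal places
```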
Normal Moment Derivations
Let $M_X(t) = e^{\mu t + \sigma^2 t^2/2}$.
Mean: $M_X'(t) = (\mu + \sigma^2 t)\,M_X(t)$, so $E[X] = M_X'(0) = \mu$.
Variance: $M_X''(t) = \sigma^2 M_X(t) + (\mu + \sigma^2 t)^2 M_X(t)$, so $E[X^2] = M_X''(0) = \sigma^2 + \mu^2$ and $\text{Var}(X) = \sigma^2 + \mu^2 - \mu^2 = \sigma^2$.
Skewness: The distribution is perfectly symmetric, so its standardized skewness is **0**.
Kurtosis: The fourth central moment is $E[(X - \mu)^4] = 3\sigma^4$. This gives a kurtosis of 3, and an **excess kurtosis of 0**. This is the benchmark against which all other distributions' "tail-heaviness" is measured.
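The benchmark value can be confirmed by integrating the central moments numerically. The sketch below (hypothetical $\sigma = 1.5$) approximates $E[(X - \mu)^k]$ for a Normal with a trapezoidal sum and checks that the excess kurtosis comes out at essentially zero:

```python
import math

def central_moment(k, mu=0.0, sigma=1.5, lo=-20.0, hi=20.0, n=40_000):
    # Trapezoidal approximation of E[(X - mu)^k] for X ~ N(mu, sigma^2)
    dx = (hi - lo) / n
    total = 0.0
    for i in range(n + 1):
        x = lo + i * dx
        w = 0.5 if i in (0, n) else 1.0
        pdf = math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))
        total += w * (x - mu) ** k * pdf
    return total * dx

var = central_moment(2)              # sigma^2     = 2.25
m4 = central_moment(4)               # 3 * sigma^4 = 15.1875
excess_kurtosis = m4 / var ** 2 - 3  # ~0, the Normal benchmark
```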
You might wonder about distributions like the Student's t or Lognormal. A fascinating and critical feature of these distributions is that their MGFs **do not exist** in the traditional sense, because the defining integral diverges (goes to infinity). This mathematical fact is closely tied to why they are useful for modeling extreme financial risk: the Student's t has infinite higher moments, and the Lognormal's right tail is too heavy for $E[e^{tX}]$ to converge for any $t > 0$. This tail-heaviness is the essence of a "fat tail." We will explore these in Module 2.
What's Next? From One to Many
We have reached the summit of single-variable probability theory. We have a complete toolbox of discrete and continuous distributions and a master tool (the MGF) for analyzing them with calculus.
But the real world is not about one variable; it's about how multiple variables interact. How does one stock's return relate to another's? This is the domain of multi-variable probability, and it's the gateway to understanding correlation, covariance, and regression.