Logistic Distribution

A key distribution in machine learning and growth modeling.

The "Growth Curve" Distribution

The Logistic distribution is a continuous probability distribution whose cumulative distribution function is the logistic function, which appears in logistic regression and feedforward neural networks. It resembles the normal distribution but has heavier tails, meaning it gives more probability to extreme events.

In finance, it's used in credit risk modeling to estimate the probability of default. Its S-shaped cumulative distribution function is perfect for modeling phenomena that have a "saturation" point, like the adoption rate of a new technology or the market share of a product.
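To make the saturation idea concrete, here is a minimal Python sketch of the logistic CDF used as an S-shaped growth curve. The adoption-curve numbers and the function name are purely illustrative, not from any real dataset:

```python
import math

def logistic_cdf(x, mu=0.0, s=1.0):
    """S-shaped CDF of the logistic distribution: 1 / (1 + exp(-(x - mu)/s))."""
    return 1.0 / (1.0 + math.exp(-(x - mu) / s))

# Hypothetical technology-adoption curve: mu = 5 is the inflection point
# (50% adoption at year 5); s = 1.5 controls how quickly adoption saturates.
for year in range(0, 11, 2):
    print(f"year {year}: {logistic_cdf(year, mu=5.0, s=1.5):.3f}")
```

The output climbs slowly at first, passes 0.5 at the inflection point, then flattens out — the "saturation" behavior described above.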

Core Concepts

Probability Density Function (PDF)
f(x; \mu, s) = \frac{e^{-(x-\mu)/s}}{s\left(1 + e^{-(x-\mu)/s}\right)^2}
  • \mu (mu) is the location parameter, which is also the mean, median, and mode.
  • s > 0 is the scale parameter, which is proportional to the standard deviation.
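As a quick sanity check on the formula above, a direct stdlib Python implementation of the PDF (the function name is my own):

```python
import math

def logistic_pdf(x, mu=0.0, s=1.0):
    """PDF of the logistic distribution with location mu and scale s."""
    z = math.exp(-(x - mu) / s)
    return z / (s * (1.0 + z) ** 2)

# The density peaks at x = mu, where it equals 1 / (4s).
print(logistic_pdf(0.0))                 # 0.25
print(logistic_pdf(2.0, mu=2.0, s=0.5))  # 0.5
```

The peak value 1/(4s) follows from setting x = mu, which makes the exponential term equal 1.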
Expected Value & Variance

Expected Value (Mean)

E[X] = \mu

Variance

Var(X) = \frac{s^2 \pi^2}{3}
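Both moment formulas can be spot-checked numerically. The sketch below samples via the inverse CDF, F^{-1}(u) = \mu + s \ln(u/(1-u)) (a standard inverse-transform trick), with illustrative parameter values, and compares the sample mean and variance against \mu and s^2\pi^2/3:

```python
import math
import random

random.seed(0)
mu, s = 1.0, 2.0  # illustrative parameter values

# Inverse-transform sampling: F^{-1}(u) = mu + s * ln(u / (1 - u))
xs = [mu + s * math.log(u / (1.0 - u))
      for u in (random.random() for _ in range(200_000))]

mean = sum(xs) / len(xs)
var = sum((x - mean) ** 2 for x in xs) / len(xs)

print(mean)  # should land near mu = 1.0
print(var)   # should land near s^2 * pi^2 / 3 ≈ 13.16
```

With 200,000 samples the Monte Carlo estimates typically agree with the closed forms to about two decimal places.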

Key Derivations

Deriving the Mean & Variance
The derivations for the moments of the Logistic distribution are most clearly shown using the moment-generating function (MGF).

Deriving the Expected Value (Mean)

Step 1: The Moment-Generating Function (MGF)

For a standard logistic distribution (\mu = 0, s = 1), the MGF is known to be:

M_X(t) = E[e^{tX}] = \pi t \csc(\pi t) = \frac{\pi t}{\sin(\pi t)}, \quad \text{for } |t| < 1
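This closed form can be checked numerically. The sketch below approximates E[e^{tX}] with a midpoint Riemann sum against the standard logistic PDF (truncation limits chosen so the tails are negligible; the function names are my own):

```python
import math

def standard_logistic_pdf(x):
    """Standard logistic PDF (mu = 0, s = 1)."""
    z = math.exp(-x)
    return z / (1.0 + z) ** 2

def mgf_numeric(t, lo=-60.0, hi=60.0, n=200_000):
    """Midpoint Riemann sum for E[e^{tX}]; only meaningful for |t| < 1."""
    h = (hi - lo) / n
    total = 0.0
    for i in range(n):
        x = lo + (i + 0.5) * h
        total += math.exp(t * x) * standard_logistic_pdf(x)
    return total * h

t = 0.3
print(mgf_numeric(t))                        # numerical estimate
print(math.pi * t / math.sin(math.pi * t))   # closed form pi*t*csc(pi*t)
```

The two printed values should agree to several decimal places for any |t| < 1.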

The mean is the first derivative of the MGF, evaluated at t=0.

Step 2: Calculate the First Derivative

We need to find M_X'(0). The derivative of the MGF is messy, but applying L'Hôpital's rule shows that its limit as t \to 0 is 0.

E[X] = M_X'(0) = 0
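Rather than differentiating directly, both M_X'(0) and M_X''(0) can be read off a short Taylor expansion of the MGF around t = 0:

```latex
\sin(\pi t) = \pi t - \frac{(\pi t)^3}{6} + O(t^5)
\quad\Longrightarrow\quad
M_X(t) = \frac{\pi t}{\sin(\pi t)}
       = \frac{1}{1 - \frac{(\pi t)^2}{6} + O(t^4)}
       = 1 + \frac{\pi^2 t^2}{6} + O(t^4)
```

The coefficient of t is 0, confirming M_X'(0) = 0; the coefficient of t^2 gives M_X''(0) = 2 \cdot \frac{\pi^2}{6} = \frac{\pi^2}{3}, the value used in the variance derivation.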

For a general logistic distribution Y = \mu + sX, the mean is:

E[Y] = E[\mu + sX] = \mu + sE[X] = \mu + s(0) = \mu
Final Mean Formula
E[X] = \mu

Deriving the Variance

We use Var(X) = E[X^2] - (E[X])^2. We need the second moment, E[X^2].

Step 1: Calculate the Second Derivative of the MGF

The second moment E[X^2] for the standard distribution is the second derivative of the MGF, evaluated at t = 0.

E[X^2] = M_X''(0)

This is a more complex derivative, which can be shown to evaluate to:

M_X''(0) = \frac{\pi^2}{3}

So, for the standard distribution, Var(X) = E[X^2] - (E[X])^2 = \frac{\pi^2}{3} - 0^2 = \frac{\pi^2}{3}.
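The value \pi^2/3 for the second moment can be verified by numerically integrating x^2 against the standard PDF; a stdlib sketch using a midpoint Riemann sum over a wide truncated range:

```python
import math

def standard_logistic_pdf(x):
    """Standard logistic PDF (mu = 0, s = 1)."""
    z = math.exp(-x)
    return z / (1.0 + z) ** 2

# E[X^2] = integral of x^2 * f(x) dx, approximated on [-60, 60].
lo, hi, n = -60.0, 60.0, 200_000
h = (hi - lo) / n
second_moment = sum(
    (lo + (i + 0.5) * h) ** 2 * standard_logistic_pdf(lo + (i + 0.5) * h)
    for i in range(n)
) * h

print(second_moment)     # should land near pi^2 / 3
print(math.pi ** 2 / 3)  # ≈ 3.2899
```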

Step 2: Scale for the General Distribution

For the general distribution Y = \mu + sX, the variance is:

Var(Y) = Var(\mu + sX) = Var(sX) = s^2 Var(X) = \frac{s^2 \pi^2}{3}
Final Variance Formula
Var(X) = \frac{s^2 \pi^2}{3}