1

It is well known that the normal distribution $N(\mu, \sigma^2)$ on $\mathbb{R}^1$ (let's focus on the one-dimensional case for now) has the following property:

If $X\sim \mathcal N(\mu_1, \sigma_1^2)$, $Y \sim \mathcal N (\mu_2, \sigma_2^2)$ are two independent normal variables, then $$ aX+bY \sim \mathcal N(a\mu_1 + b\mu_2, a^2\sigma_1^2 + b^2\sigma_2^2), $$ so $aX+bY$ is also normally distributed.
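As a quick numerical sanity check of this property (a Monte Carlo sketch using NumPy; the particular constants $a, b, \mu_i, \sigma_i$ below are arbitrary illustrative choices, not from the question):

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = 2.0, -3.0
mu1, sigma1 = 1.0, 0.5
mu2, sigma2 = -2.0, 1.5

n = 1_000_000
X = rng.normal(mu1, sigma1, n)
Y = rng.normal(mu2, sigma2, n)
Z = a * X + b * Y

# Theoretical parameters of aX + bY from the closure property
mean_theory = a * mu1 + b * mu2                   # 2*1 + (-3)*(-2) = 8.0
var_theory = a**2 * sigma1**2 + b**2 * sigma2**2  # 4*0.25 + 9*2.25 = 21.25

print(Z.mean(), Z.var())  # should be close to 8.0 and 21.25
```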

So my question is: is the converse true? If a family of probability distributions on $\mathbb{R}^1$ is

  • parameterized by its expectation $\mu$ and variance $\sigma^2$, i.e. for each $\mu \in \mathbb{R}$ and $\sigma^2 \in \mathbb{R}^{+}$ there is exactly one distribution in this family with expectation $\mu$ and variance $\sigma^2$
  • closed under independent addition, i.e. if the distributions of $X$ and $Y$ both belong to this family and $X, Y$ are independent, then the distribution of $X+Y$ is also in this family
  • closed under scalar multiplication, i.e. if the distribution of $X$ is in this family, then for every $a \in \mathbb{R}\backslash\{0\}$, the distribution of $aX$ is also in this family

then can we claim that this family of distributions is exactly the family of all normal distributions on $\mathbb{R}^1$?

(Maybe we will have to generalize the normal distribution to include the case $\sigma^2 = 0$, i.e. the distribution that equals $\mu$ with probability 1; equivalently, the "delta distribution at $\mu$".)

Mr. Egg
  • 754
  • As a comment regarding the "extension," one almost always includes Dirac measures as normal random variables in studies of the structure of Gaussian distributions. This is needed to have complete spaces of random variables when you work with things like Gaussian Hilbert spaces. – Chris Janjigian Aug 23 '24 at 01:36

1 Answer

3

The central limit theorem does the job. (At least, if we include the variance-$0$ distributions in there, allowing us to add a constant.)

Let $X_1, X_2, \dots, X_n$ be $n$ independent random variables, each with the distribution from this family that has mean $0$ and variance $1$. Then $$Z_n = \frac{X_1 + X_2 + \dots + X_n}{\sqrt n}$$ also has mean $0$ and variance $1$, so by assumption, it has that same distribution (for all $n$). By the central limit theorem, however, $Z_n$ converges in distribution to the standard normal as $n \to \infty$.
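The CLT ingredient of this argument can be illustrated numerically (a hedged Monte Carlo sketch; the uniform starting distribution, sample sizes, and test point below are illustrative choices, not part of the proof): starting from a deliberately non-normal distribution with mean $0$ and variance $1$, the empirical CDF of $Z_n$ at a test point approaches the standard normal value $\Phi(1) \approx 0.8413$.

```python
import numpy as np

rng = np.random.default_rng(1)

# Deliberately non-normal start: uniform on [-sqrt(3), sqrt(3)]
# has mean 0 and variance 1.
def sample_Z(n, reps=200_000):
    X = rng.uniform(-np.sqrt(3), np.sqrt(3), size=(reps, n))
    return X.sum(axis=1) / np.sqrt(n)

# Empirical P(Z_n <= 1); for the standard normal, Phi(1) ~ 0.8413,
# while for the uniform itself (n = 1) it is (1 + sqrt(3))/(2 sqrt(3)) ~ 0.7887.
probs = {n: (sample_Z(n) <= 1).mean() for n in (1, 2, 10, 100)}
print(probs)
```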

The only way that the sequence $Z_1, Z_2, Z_3, \dots$ of identically distributed variables can converge to the standard normal distribution is if their distribution is already the standard normal. From there, scaling by $\sigma$ and shifting by $\mu$ proves that any other distribution in the family is also normal.

It's inelegant to have to include the $\sigma=0$ case just to be able to shift over the $Z$'s at the end. Maybe we can argue as follows:

  • Let $X$ be a random variable from this family with mean $\mu$ and variance $\epsilon$, for some small $\epsilon > 0$.
  • Let $Y$ be a random variable from this family with mean $0$ and variance $\sigma^2-\epsilon$; by the argument above, $Y$ is normally distributed.

Then if $X$ and $Y$ are independent, $X+Y$ has the distribution from this family with mean $\mu$ and variance $\sigma^2$. However, as $\epsilon \to 0$, the CDF of $X+Y$ should approach the CDF of $\mu+Y$ (using, for example, Chebyshev's inequality to say that $X$ cannot be too far from $\mu$). This means that $X+Y$ should also be normal.
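The $\epsilon \to 0$ step can be sketched numerically (a hedged illustration: $X$ is taken normal with variance $\epsilon$ here purely for convenience, since the argument itself only needs Chebyshev's bound $P(|X-\mu| > \delta) \le \epsilon/\delta^2$, which holds for any family member). The largest CDF gap between $X+Y$ and $\mu+Y$ on a grid shrinks with $\epsilon$:

```python
import numpy as np

rng = np.random.default_rng(2)
mu, var = 1.0, 4.0           # target mean and variance of X + Y
grid = np.linspace(-6, 8, 200)
reps = 500_000

gaps = {}
for eps in (1.0, 0.1, 0.01):
    # Illustrative choice: X normal with variance eps; only Chebyshev
    # concentration of X around mu is actually used in the argument.
    X = rng.normal(mu, np.sqrt(eps), reps)
    Y = rng.normal(0.0, np.sqrt(var - eps), reps)
    S, T = X + Y, mu + Y
    # Largest empirical CDF discrepancy between X + Y and mu + Y on the grid
    gaps[eps] = max(abs((S <= t).mean() - (T <= t).mean()) for t in grid)
print(gaps)  # gap shrinks as eps -> 0
```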

Misha Lavrov
  • 159,700
  • 1
    (Don't forget to center the $X_i$!) – Ziv Aug 23 '24 at 01:26
  • 1
    @Ziv You're right! I made a last-minute change in the hopes of avoiding the variance-$0$ distributions, but it doesn't look like that works out. (I've changed the argument to only deal with the standard normal case, but centering the $X_i$ would also work and also requires being able to add a constant.) – Misha Lavrov Aug 23 '24 at 01:30
  • Doesn’t $Z_n$ have mean $\sqrt{n}\mu$? – A rural reader Aug 23 '24 at 01:30
  • 1
    @Aruralreader Not since my edit a minute ago it doesn't :) – Misha Lavrov Aug 23 '24 at 01:31
  • @MishaLavrov: Thanks, I’ll have to give this some thought. – A rural reader Aug 23 '24 at 01:49
  • 1
    @MishaLavrov I agree that it's nice to not need to assume there's a $\sigma = 0$ case of the family. But I think by your "$\varepsilon$ borrowed variance" trick, you can effectively show that if the family contains all the $\sigma > 0$ cases, then it also contains the $\sigma = 0$ case. – Ziv Aug 23 '24 at 01:52