
Is there a canonical continuous probability distribution whose center is best characterized by the harmonic mean, given by $$ \mathrm{HM}(X) = n \cdot \left( \sum\limits_{k=1}^{n} x_k^{-1} \right)^{-1}? $$

With "canonical" I mean: similar to the way we typically associate the normal distribution with the arithmetic mean, $$ \mathrm{AM}(X) = \sum\limits_{k=1}^{n} \dfrac{x_k}{n}, $$ or the log-normal distribution with the geometric mean, $$ \mathrm{GM}(X) = \prod\limits_{k=1}^{n} x_k^{\frac{1}{n}}. $$


Update: Because zero is not invertible, I guess that the support of such a distribution would have to be a subset of $\mathbb{R} \setminus \{ 0 \}$, maybe $\mathbb{R}_{>0}$.

2 Answers


The inverse normal distribution does not furnish a suitable example, because if $Y = Z^{-1}$ where $Z \sim \operatorname{Normal}(0,1)$, then $\operatorname{E}[Y]$ does not exist. We can, however, consider a double-inverse gamma distribution: define $$f_X(x) = \frac{|x|}{2}e^{-|x|}, \quad -\infty < x < \infty.$$ This is indeed a density: it is a symmetrized $\operatorname{Gamma}(2,1)$ density, each half contributing mass $\frac{1}{2}$. Now let $Y = X^{-1}$; the change-of-variables formula gives the density of $Y$ as $$f_Y(y) = f_X(y^{-1})\,y^{-2} = \frac{1}{2|y|^3} e^{-1/|y|}, \quad y \ne 0.$$ This distribution does have a well-defined expectation, since $$\int_{y=0}^\infty y f_Y(y) \, dy = \frac{1}{2}\int_{y=0}^\infty \frac{e^{-1/y}}{y^2} \, dy = \frac{1}{2},$$ and because $f_Y$ is an even function, it follows that $\operatorname{E}[Y] = 0$.
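
As a quick sanity check of these calculations (my own addition; Python with SciPy assumed), one can verify numerically that $f_Y$ integrates to one and that the positive-half mean integral is $\frac{1}{2}$:

```python
# Numerical sanity check of the calculations above (SciPy assumed).
import numpy as np
from scipy.integrate import quad

def f_Y(y):
    """Density of Y = 1/X derived above, for y != 0."""
    return np.exp(-1.0 / abs(y)) / (2.0 * abs(y) ** 3)

half_mass, _ = quad(f_Y, 0, np.inf)                  # mass on (0, inf)
half_mean, _ = quad(lambda y: y * f_Y(y), 0, np.inf)

print(2 * half_mass)   # ~1.0: f_Y is a valid density (it is even)
print(half_mean)       # ~0.5: so E[Y] = 0 by symmetry
```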

Now, whether the harmonic mean of an IID sample drawn from $Y$ is in some sense the "best" estimator of the population mean, on the grounds that $\bar x$ is the "best" estimator of the mean of $X$ and $Y = 1/X$, I am not so sure. The estimator $\tilde y = n (\sum_{i=1}^n y_i^{-1})^{-1}$ has expectation $$\operatorname{E}[\tilde y] = n \operatorname{E}\left[\left(\sum_{i=1}^n y_i^{-1}\right)^{-1}\right],$$ but it cannot be said that the RHS is in general equal to $$n \left(\operatorname{E}\left[\sum_{i=1}^n y_i^{-1}\right]\right)^{-1},$$ because in general $$\operatorname{E}[g(X)] \ne g(\operatorname{E}[X]):$$ the expectation of a function of a random variable does not generally equal the function evaluated at the variable's expected value. If you could say that, then the expectation would pass through the sum by linearity, and since each $y_i^{-1}$ is distributed as $X$, you would get $$n \left(\sum_{i=1}^n \operatorname{E}[y_i^{-1}]\right)^{-1} = n \left(n \operatorname{E}[X]\right)^{-1} = \operatorname{E}[X]^{-1}.$$ And then you would run into the same fallacy again: you cannot claim that this last expression equals $\operatorname{E}[Y]$. Thus the idea of considering inverse distributions seems dubious to me; the simulation below illustrates the gap.
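
Here is a small Monte Carlo sketch of that gap (my own addition; Python with NumPy assumed). It uses a one-sided toy case, $X \sim \operatorname{Gamma}(3,1)$ and $Y = X^{-1}$, so that everything stays positive and $\operatorname{E}[Y] = \frac{1}{2}$ exactly:

```python
# Monte Carlo sketch of the Jensen gap (NumPy assumed).
# Toy case: X ~ Gamma(shape=3, rate=1), Y = 1/X, so
# E[Y] = 1/(3 - 1) = 0.5 exactly (inverse-gamma mean).
import numpy as np

rng = np.random.default_rng(0)
n, reps = 5, 200_000

x = rng.gamma(shape=3.0, scale=1.0, size=(reps, n))
y = 1.0 / x

# Harmonic-mean estimator for each size-n sample: n / sum(1/y_i) = 1/mean(x_i).
hm = n / np.sum(1.0 / y, axis=1)

print(hm.mean())   # ~ n/(3n - 1) = 5/14 ~ 0.357, not E[Y]
print(y.mean())    # ~ E[Y] = 0.5
```

In this toy case the estimator's expectation can be computed exactly: $\sum_i x_i \sim \Gamma(3n, 1)$, so $\operatorname{E}[\tilde y] = n/(3n-1)$, which differs from $\operatorname{E}[Y] = \frac{1}{2}$ for every finite $n$.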

heropup

It's quite unclear what you mean by "canonical" and "associated with" since

  1. $\operatorname{AM}(F_n)\overset{a.s.}{\longrightarrow} \operatorname{E}(F)$ for any $F$ (provided $\operatorname{E}(F)$ exists), not just normal $F$.

  2. $\operatorname{GM}(F_n)\overset{a.s.}{\longrightarrow} \operatorname{median}(F)$ for any $F=\exp(S)$ with $\operatorname{E}(S)=\operatorname{median}(S)$, not just normal $S$.

  3. Pending that clarification, $\operatorname{HM}(F_n)\overset{a.s.}{\longrightarrow} \operatorname{mode}(F)$ for $F\sim\Gamma(\alpha,\beta)$ with $\alpha>1$, since $\operatorname{mode}(\Gamma(\alpha,\beta))=\frac{\alpha-1}{\beta}=\left(\operatorname{E}(\operatorname{Inv}\Gamma(\alpha,\beta))\right)^{-1}$ (see the simulation sketch after this list).

  4. In fact, the above convergence holds for any distribution $F$ such that $$\int_{\mathbb{R}} \frac{dF(u)}{u}=\frac{1}{\operatorname{mode}(F)}.$$ These include all distributions with support $[0,\infty)$ that have a mode at $0$. Otherwise, since the HM is skewed towards small values of the distribution, it picks up the mode when $f(0)=0$ and $f$ rises to its peak and then falls off more slowly than it rose.

  5. A discrete example on $\{a,b,c\}$: pick any positive values $a$, $c$ and probabilities $p_a$, $p_c$ such that $p_b=1-p_a-p_c>\max\{p_a,p_c\}$ (so that $b$ is the mode), and set $b^{-1}=\frac{p_a}{p_a+p_c}a^{-1}+\frac{p_c}{p_a+p_c}c^{-1}$ to get $\operatorname{HM}(F_n)\to b$. Setting $p_a=p_c<1/3$ gives $b=\operatorname{HM}(a,c)$.
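
A quick simulation of items 3 and 5 (a sketch of my own; Python with NumPy assumed):

```python
# Numerical check of items 3 and 5 above (NumPy assumed).
import numpy as np

rng = np.random.default_rng(1)

# Item 3: X ~ Gamma(alpha, rate=beta), alpha > 1; HM -> mode = (alpha-1)/beta.
alpha, beta = 3.0, 2.0
x = rng.gamma(shape=alpha, scale=1.0 / beta, size=1_000_000)
print(x.size / np.sum(1.0 / x), (alpha - 1) / beta)   # both ~ 1.0

# Item 5: p_a = p_c = 0.25 < 1/3, p_b = 0.5, a = 1, c = 3 => b = HM(a, c) = 1.5.
a, c = 1.0, 3.0
b = 2.0 / (1.0 / a + 1.0 / c)
s = rng.choice([a, b, c], p=[0.25, 0.5, 0.25], size=1_000_000)
print(s.size / np.sum(1.0 / s), b)                    # both ~ 1.5
```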
A.S.