Looking for a limiting value: $$\lim_{K\to \infty } \, -\frac{x \sum _{j=0}^K x (a+1)^{-3 j} \left(-(1-a)^{3 j-3 K}\right) \binom{K}{j} \exp \left(-\frac{1}{2} x^2 (a+1)^{-2 j} (1-a)^{2 j-2 K}\right)}{\sum _{j=0}^K (a+1)^{-j} (1-a)^{j-K} \binom{K}{j} \exp \left(-\frac{1}{2} x^2 (a+1)^{-2 j} (1-a)^{2 j-2 K}\right)}-1$$ with $a \in [0,1)$ and $x$ on the real line. The idea is then to take the second limit as $x \rightarrow \infty$. The aim is to compute the slope of the tail exponent of a distribution.
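For concreteness, here is a small numerical sketch (my own, not part of the question) of the bracketed expression. It is $-x f'(x)/f(x)-1$ for an equal-weight binomial mixture of centered Gaussians with standard deviations $\sigma_j=(1+a)^j(1-a)^{K-j}$; the function and variable names are mine.

```python
# Sketch: evaluate the expression inside the limit for given x, a, K.
# The j-th mixture component has std sigma_j = (1+a)^j (1-a)^(K-j).
from math import comb, exp

def tail_slope(x, a, K):
    """-x f'(x)/f(x) - 1 for the binomial mixture of centered Gaussians."""
    num = 0.0  # sum of C(K,j) sigma_j^-3 exp(-x^2 / (2 sigma_j^2))
    den = 0.0  # sum of C(K,j) sigma_j^-1 exp(-x^2 / (2 sigma_j^2))
    for j in range(K + 1):
        sigma = (1 + a) ** j * (1 - a) ** (K - j)
        w = comb(K, j) * exp(-0.5 * x * x / sigma ** 2)
        num += w / sigma ** 3
        den += w / sigma
    return x * x * num / den - 1.0
```

As a sanity check, for $a=0$ every $\sigma_j=1$ and the expression collapses to $x^2-1$, the Gaussian value.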

Making the variable continuous (binomial as ratio of gamma functions) makes it equivalent to: $$\lim_{N\to \infty } 1-\frac{x (1-a)^N \displaystyle\int_0^N -\frac{x (a+1)^{-3 y}\Gamma (N+1) (1-a)^{3 (y-N)} \exp \left(-\frac{1}{2} x^2 (a+1)^{-2 y} (1-a)^{2 y-2 N}\right)}{\Gamma (y+1) \Gamma (N-y+1)} \, \mathrm{d}y}{\sqrt{2} \displaystyle\int_0^N \frac{\left(\frac{2}{a+1}-1\right)^y \Gamma (N+1) \exp \left(-\frac{1}{2} x^2 (a+1)^{-2 y} (1-a)^{2 y-2 N}\right)}{\sqrt{2} \, \Gamma (y+1) \Gamma (N-y+1)} \, \mathrm{d}y}$$ Thank you in advance.

BCLC
Nero

1 Answer


The answer for the continuous version, when the binomials are replaced with the $\Gamma$ function, is $$\alpha = \frac{\log\left(-\frac{\log(1+a)}{\log(1-a)}\right)}{\log \frac{1-a}{1+a}}$$ and, when the binomials are instead replaced with a normal distribution of mean $K/2$ and variance $K/4$, it is $$\alpha = -2\frac{\log \left(1-a^2\right)}{\log ^2\left(\frac{1-a}{a+1}\right)}$$
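As a sanity check (mine, not from the answer), both closed forms can be evaluated directly; in the $a\to 0$ limit each tends to $1/2$, consistent with the numerical value of $1/2$ reported in the comments.

```python
# Direct evaluation of the two closed forms above (function names are mine).
from math import log

def alpha_gamma(a):
    # continuous version: binomials replaced by Gamma functions
    return log(-log(1 + a) / log(1 - a)) / log((1 - a) / (1 + a))

def alpha_normal(a):
    # normal approximation: mean K/2, variance K/4
    return -2 * log(1 - a * a) / log((1 - a) / (1 + a)) ** 2
```

A first-order expansion in $a$ shows both expressions approach $1/2$ as $a\to 0$, so the two approximations agree in that limit.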

The discrete version does not converge. Consider $a=\phi^{-1}$ (the golden ratio conjugate) and $x=(2\phi+1)^n$ for some $n \in \mathbb{N}$; the standard deviation will be equal to $x$ for $j=n+2K/3$. The tail exponent oscillates with a period of $3$ and does not converge as $K\rightarrow \infty$.
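The algebra behind this resonance can be checked directly. The sketch below (mine) verifies that with $a=\phi^{-1}$ the mixture standard deviations $\sigma_j=(1+a)^j(1-a)^{K-j}$ fall on a geometric grid of ratio $\phi^3=2\phi+1$, and that $x=(2\phi+1)^n$ coincides with the component at $j=n+2K/3$.

```python
# With a = 1/phi: 1+a = phi and 1-a = phi^-2, so
# sigma_j = (1+a)^j (1-a)^(K-j) = phi^(3j - 2K),
# a geometric grid of ratio phi^3 = 2*phi + 1.
from math import sqrt

phi = (1 + sqrt(5)) / 2
a = 1 / phi

def sigma(j, K):
    """Std of the j-th mixture component (my notation)."""
    return (1 + a) ** j * (1 - a) ** (K - j)

K, n = 9, 2                 # pick K divisible by 3
x = (2 * phi + 1) ** n      # x = phi^(3n)
j_star = n + 2 * K // 3     # the component whose std matches x
```

Since $x$ always lands exactly on a grid point, rescaling $x$ by anything other than a power of $\phi^3$ shifts its position within the period, which is why the local tail slope oscillates instead of converging.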

However, for all intents and purposes, the first $\alpha$ given is a good description of the asymptotic behavior of the tail; the tail just isn't smooth enough to see it at the infinitesimal scale.

Arthur B.
  • Can you walk us through the steps for $f'/f$? – Nero Oct 30 '13 at 20:56
  • I'm not sure what you mean. There are basically three things I did in this post: reparametrize with $p=(a+1)/2$, suggest a continuous approximation, and report some empirical results by taking the limit over sequences of $x_k$. – Arthur B. Oct 31 '13 at 13:24
  • Doing the continuous approximation and change of variable $a=2 p -1$ (and using $N$ instead of $K$), I get $$-\frac{x \displaystyle\int_0^N -\frac{2^{-4 N-\frac{1}{2}} x p^{-3 j} \, \Gamma (N+1) (1-p)^{3 (j-N)} \exp \left(-2^{-2 N-1} x^2 p^{-2 j} (1-p)^{2 j-2 N}\right)}{\sqrt{\pi } \, \Gamma (j+1) \Gamma (-j+N+1)} \, \mathrm{d}j}{\displaystyle\int_0^N \frac{\left(\frac{1}{p}-1\right)^j (4-4 p)^{-N} \Gamma (N+1) \exp \left(-2^{-2 N-1} x^2 p^{-2 j} (1-p)^{2 j-2 N}\right)}{\sqrt{2 \pi } \, \Gamma (j+1) \Gamma (-j+N+1)} \, \mathrm{d}j}-1$$ – Nero Oct 31 '13 at 18:58
  • Do a change of variable $y = K - j$, and get rid of the constant factors in the numerator and the denominator. Also multiply both the numerator and the denominator by $(a+1)^K (a-1)^K$. – Arthur B. Oct 31 '13 at 19:14
  • I switched a $j$ and $K-j$ by mistake; I'm explaining the steps and fixing the error. – Arthur B. Oct 31 '13 at 19:39
  • I went the other route: consider the binomial in the sum as a ratio of gamma functions $$\binom{N}{j}=\frac{\Gamma (N+1)}{\Gamma (j+1) \Gamma (-j+N+1)}$$ and multiply the integral by $2^N$. – Nero Oct 31 '13 at 19:40
  • That seems less likely to yield an analytic solution. The error made by approximating the binomial law with a normal law seems like it could be bounded without too much work. But it's hard to progress past this because the $y$ gets stuck inside the variance. What's the motivation for the problem? – Arthur B. Oct 31 '13 at 20:34
  • I'm looking at the solution and will try it numerically, then post the context/motivation when I have a web connection. – Nero Nov 02 '13 at 19:58
  • Numerically, I tend to find $1/2$ instead, with convergence becoming very poor for $a>0.4$. The answer given is for the continuous approximation, which may be inappropriate. – Arthur B. Nov 04 '13 at 15:44
  • A $3/2$ convergence makes sense, with $1 < \text{tail} < 2$, so it could be an error. The background is in chapter 8, p. 97: https://docs.google.com/file/d/0B_31K_MP92hUVjNBUFB5VDZOMDg/edit?usp=sharing – Nero Nov 04 '13 at 17:10
  • For two levels, you're looking at $\frac{1}{4}(1+a(1+a))+\frac{1}{4}(1+a(1-a))+\frac{1}{4}(1-a(1+a))+\frac{1}{4}(1-a(1-a))$, but $1+a(1+a)$ is not the same as $(1+a)(1+a)$ and thus the binomial formula doesn't represent what you're after. You want $2^{-K}(1\pm a \pm a^2 \ldots \pm a^K)$ – Arthur B. Nov 05 '13 at 04:27
  • But then, the variances are going to be contained in the interval $[1-\frac{a}{1-a},1+\frac{a}{1-a}]$ and there won't be any fat tails. – Arthur B. Nov 05 '13 at 04:55
  • No, your answer about $(a+1)$ is not correct. – Nero Nov 05 '13 at 19:44
  • $1+a(1+a) \neq (1+a)(1+a)$, am I missing something? – Arthur B. Nov 05 '13 at 20:26
  • I am using the latter in the derivations. – Nero Nov 06 '13 at 14:31
  • The last formula of page 98 is incorrect. Just take $N=2$: $1+a(1)(1+a(2)) \neq (1+a(1))(1+a(2))$. Or simply consider that if your uncertainty is $5\%$, the largest error you can get is $.05 \times (1 + .05 \ldots) \sim 5.263\%\ldots$ – Arthur B. Nov 06 '13 at 14:57
  • Ignore that; one parenthesis got messed up in the LaTeX translation. – Nero Nov 06 '13 at 19:20
  • I think you really do mean $1+a(1+a)$. For instance, you say: "Thus in place of $a(1)$ we have $\frac{1}{2} a(1)( 1\pm a(2))$". You also refer to it as "uncertainty about the error rate $a(1)$". Now if you insist that what you care about is the binomial, then the answer is as above: no convergence in the discrete case, convergence in the continuous case with an analytical answer. – Arthur B. Nov 06 '13 at 20:49
  • Fixed it... this is for the "other regime" of non-multiplicative probability. – Nero Nov 08 '13 at 13:34
  • Ok, but it still doesn't converge :) You keep getting ripples in the tail that make the local derivative meaningless. – Arthur B. Nov 08 '13 at 15:31
  • The continuous version converges though, multiply $x$ by $\lambda = \left(\frac{1+a}{1-a}\right)^{\epsilon}$, you can then interpret that as shifting the integrand by $\epsilon$ and multiplying by $\lambda$. Since all of the mass is going to be in a very small region where the std is close to $x$, the continuous binomial coefficients are locally exponential and you can undo the shift by multiplying by $\left(\frac{\log (1-a)}{\log \frac{1-a}{a+1}}\right)^{\epsilon}$. This gives you the $\alpha$ in the answer. – Arthur B. Nov 08 '13 at 15:40
  • The answer is valid in a sense for the discrete version as well; it's just that the tail is wavy and the phase varies with $K$, so a local derivative isn't meaningful. But for practical purposes, if you squint, the answer is the same. – Arthur B. Nov 08 '13 at 15:43