
Given a function:

$$f[x]=a\, \Phi \left[-x+\sigma \sqrt{\tau}\right]-\left(b+c\, e^{-d \tau}\right)\Phi \left[-x\right]$$

where $\Phi$ is the cumulative distribution function of the standard normal distribution: $$\Phi\left[z\right] = \frac{1}{\sqrt{2 \pi}}\int^{z}_{-\infty}e^{\frac{-u^2}{2}} \, du $$
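(For numerical experimentation, $\Phi$ is easy to evaluate through the error function via the standard identity $\Phi(z)=\tfrac{1}{2}\left(1+\operatorname{erf}\left(z/\sqrt{2}\right)\right)$; e.g. in Python:

```python
import math

def Phi(z):
    # Standard normal CDF via Phi(z) = (1 + erf(z / sqrt(2))) / 2.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

print(Phi(0.0))       # 0.5, by symmetry
print(Phi(1.959964))  # roughly 0.975, the familiar 97.5% point
```

)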

...how can I find $x$ which satisfies the condition $f[x]=0$? Suppose that $a$, $b$, $c$, $d$, $\sigma$, and $\tau$ are known quantities.

I am stuck trying to use inverse identities, since approximations to inverse functions seem not to hold when inverting a probability function multiplied by a constant.

Also, although there is an algorithm to find $x$ through recursion:

$$x \to-\Phi^{-1}\left[\frac{a}{b+c\, e^{-d \tau}}\, {\Phi\left[-x+\sigma \sqrt{\tau}\right]}\right] \,\,\, \forall \,\,\, \tau \in \, [t,T]$$

where $\Phi^{-1}$ is the probit function.

...this does not satisfy the closed-form requirement.
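(For what it's worth, the recursion is straightforward to run numerically. The Python sketch below uses illustrative parameter values of my own choosing, picked so that a root exists, and checks that the limit of the iteration is indeed a root of $f$.)

```python
import math
from statistics import NormalDist

N = NormalDist()  # standard normal: N.cdf is Phi, N.inv_cdf is the probit

# Illustrative parameter values (not from any particular application).
a, b, c, d, sigma, tau = 0.5, 0.5, 0.3, 0.1, 0.2, 1.0
u = sigma * math.sqrt(tau)
v = b + c * math.exp(-d * tau)

def f(x):
    return a * N.cdf(u - x) - v * N.cdf(-x)

# The fixed-point map x -> -Phi^{-1}((a / v) * Phi(u - x)).
x = 1.0
for _ in range(500):
    x = -N.inv_cdf((a / v) * N.cdf(u - x))

print(x, f(x))  # f(x) should be numerically zero at the limit
```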

Acceptable answers may include closed-form solutions as well as numerical approximations, provided that the approximations converge for all real $x$ with $|x| \lesssim 5$. I would also appreciate any direction or references.

prime

1 Answer


Define $u:=\sigma \sqrt{\tau}$ and $v:=b+c e^{-d\tau}$, so $f(x)=a\Phi(u-x)-v\Phi(-x)$. For $f$ to have a root, clearly $a$ and $v$ must have identical signs. I assume this in the following.

Given this, $f(x)=0$ implies $\log{|a|}+\log{\Phi(u-x)}=\log{|v|}+\log{\Phi(-x)}$, so if we additionally define $w=\log{|v|}-\log{|a|}$, then any root of $g$ defined by $$g(x):=\log{\Phi(u-x)}-\log{\Phi(-x)}-w$$ is also a root of $f$.

Now, if $u>0$, then $\log{\Phi(u-x)}>\log{\Phi(-x)}$, so there can only be a root if $w>0$. If $u=0$, then $g(x)=-w$ and there can only be a root if $w=0$ (in which case all points are roots). Finally, if $u<0$, then $\log{\Phi(u-x)}<\log{\Phi(-x)}$, so there can only be a root if $w<0$. Henceforth then, I will also assume that $\operatorname{sign}u=\operatorname{sign}w$.

In line with your desire for solutions which are accurate when $|x|$ is large, the natural next step is to compute asymptotic expansions of $g(x)$ as $x\rightarrow+\infty$ and as $x\rightarrow-\infty$. I confess I did these in Maple to save time. (The Maple code is included at the bottom.) These series expansions depend on whether $u>0$ or $u<0$.

The expressions are particularly simple around $+\infty$. In particular, as $x\rightarrow \infty$: $$g(x)=ux-\frac{u^2}{2}-w+O\left(\frac{1}{x}\right).$$ Thus, when the root $x$ of $f$ is large: $$x\approx \frac{u^2+2w}{2u},$$ i.e. this approximation is valid for large $u$, for large $w$, or (when $u<0$) for large $|w|$ combined with small $|u|$.
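As a quick sanity check of this approximation (a Python sketch rather than Maple; the values $u=1$, $w=4$ are hypothetical, chosen so that the root is moderately large), one can compare it against a bisection root of $g$:

```python
import math
from statistics import NormalDist

N = NormalDist()
u, w = 1.0, 4.0  # hypothetical values with sign(u) == sign(w)

def g(x):
    return math.log(N.cdf(u - x)) - math.log(N.cdf(-x)) - w

# Asymptotic root as x -> +infinity.
x_approx = (u * u + 2.0 * w) / (2.0 * u)

# Bisection for the true root; g is increasing in x when u > 0.
lo, hi = 0.0, 6.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if g(mid) < 0.0:
        lo = mid
    else:
        hi = mid
x_true = 0.5 * (lo + hi)
print(x_approx, x_true)  # the two agree to within the O(1/x) error
```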

Approximating around $-\infty$ produces a slightly messier expression, but its inverse is simpler. In particular, when the root $x$ of $f$ is negative and large in magnitude, and either $u>0$, or $u<0$ and small in magnitude: $$x\approx -\sqrt{\operatorname{LambertW}\left(\frac{1}{2w^2\pi}\right)}$$ Since the $\operatorname{LambertW}$ function (see Maple's definition here) tends to $\infty$ as its argument tends to $+\infty$, this approximation is valid for small $|w|$ and small $|u|$. ($|u|$ must also be small both because it was an additional assumption in deriving this expression in the $u<0$ case, and because for large $|u|$, the previous approximation is better.)

If $\operatorname{LambertW}$ is not sufficiently "closed-form", note that for small $|w|$, from the asymptotic approximation to the $\operatorname{LambertW}$ function given on Wikipedia here, we have that: $$x\approx-\sqrt{-\log{(2w^2\pi)}-\log{(-\log{(2w^2\pi)})}-\frac{\log{(-\log{(2w^2\pi)})}}{\log{(2w^2\pi)}}}.$$
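Again as a numerical sanity check (a Python sketch with hypothetical values $u=0.5$, $w=10^{-3}$; since $\operatorname{LambertW}$ is not in the Python standard library, it is implemented below by a basic Newton iteration):

```python
import math
from statistics import NormalDist

N = NormalDist()
u, w = 0.5, 1e-3  # hypothetical values: u > 0 and w small

def lambert_w(z):
    # Principal branch of Lambert W for z >= 1, via Newton on t * exp(t) = z.
    t = math.log(z)
    for _ in range(60):
        et = math.exp(t)
        t -= (t * et - z) / (et * (t + 1.0))
    return t

def g(x):
    return math.log(N.cdf(u - x)) - math.log(N.cdf(-x)) - w

# Approximate (negative, large-magnitude) root from the LambertW expression.
x_approx = -math.sqrt(lambert_w(1.0 / (2.0 * math.pi * w * w)))

# Bisection for the true root; g is increasing in x when u > 0.
lo, hi = -6.0, -1.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if g(mid) < 0.0:
        lo = mid
    else:
        hi = mid
x_true = 0.5 * (lo + hi)
print(x_approx, x_true)
```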


Maple code:

phi:=x->1/sqrt(2*Pi)*exp(-x^2/2):
Phi:=z->int(phi(x),x=-infinity..z):
g:=x->log(Phi(u-x))-log(Phi(-x))-w:
"u>0, x>>0";
g(x):
simplify(series(%,x=infinity,3)) assuming u>0 and w>0 and x>0;
convert(%, polynom):
simplify(solve(%,x)) assuming u>0 and w>0;
"u<0, x>>0";
g(x):
simplify(series(%,x=infinity,3)) assuming u<0 and w<0 and x>0;
convert(%, polynom):
simplify(solve(%,x)) assuming u<0 and w<0;
"u>0, x<<0";
g(x):
simplify(series(%,x=-infinity,3)) assuming u>0 and w>0 and x<0;
convert(%, polynom):
simplify(solve(%,x)) assuming u>0 and w>0;
"u<0, x<<0";
g(x):
simplify(series(%,x=-infinity,3)) assuming u<0 and w<0 and x<0;
eval(%,u=0);
convert(%, polynom):
simplify(solve(%,x)) assuming u<0 and w<0;
cfp
  • Thank you for your detailed reply. It will probably take me a few more days to get through it, but I will be sure to leave a response soon. – prime Aug 24 '17 at 02:14
  • With the handicap of not seeing the Maple result, not having Maple set up and not knowing how the asymptotics is derived by Maple, judging from my hand derived asymptotics, I think there should be some logarithmic terms rather than just polynomials including negative powers. I have not thought about to what extent the logarithms affect the error though. Do you have an error bound for your approximation? – Hans Aug 28 '17 at 01:07
  • In which of the four cases (signs of x and u) do you get logarithmic terms? I have not attempted to compute error bounds. – cfp Aug 28 '17 at 11:23
  • You can use integration by parts to derive series of polynomial (negative powers included) bounds for the Mills ratio $\frac\Phi\phi$. Taking logarithms gives the logarithmic terms. For more accurate sequential bounds in continued fraction form, see https://www.stat.wisc.edu/courses/st771-newton/papers/Biometrika-1954-SHENTON-177-89.pdf. – Hans Aug 28 '17 at 18:36
  • Seems a useful reference indeed. – cfp Aug 29 '17 at 11:22