
Suppose I have a standard Brownian bridge $B(t)=(W(t)|W(0)=W(1)=0)$.

Suppose further there is a finite set $S \subset [0,1]$ where we denote $s$ a generic element of $S$. Each $s$ is associated with a known $a_s \in \mathbb{R}$ and $b_s \in \mathbb{R}$ such that $B(s) \in [a_s,b_s]$. That is, the bridge passes through some defined "gates" at some points.

I wonder how I can determine the conditional distribution of $B(t)$ given this information.

There is a brute force way, by first computing the distribution of all possible Brownian bridges using the standard formulas and then truncating this distribution point-by-point, excluding all Brownian paths that do not pass through the gates.
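For concreteness, the brute-force idea can be sketched as rejection sampling on a discrete grid. This is only an illustration (the helper names and the single gate at $s=0.5$ are made up), but it shows the point-by-point truncation I mean:

```python
import numpy as np

def sample_bridge(n_steps, rng):
    """One standard Brownian bridge path on a uniform grid of [0, 1]."""
    t = np.linspace(0.0, 1.0, n_steps + 1)
    dW = rng.normal(0.0, np.sqrt(1.0 / n_steps), size=n_steps)
    W = np.concatenate(([0.0], np.cumsum(dW)))
    return t, W - t * W[-1]                 # B(t) = W(t) - t * W(1)

def rejection_sample(gates, n_paths=5000, n_steps=100, seed=0):
    """Keep only the bridge paths passing through every gate (s, a_s, b_s)."""
    rng = np.random.default_rng(seed)
    accepted = []
    for _ in range(n_paths):
        t, B = sample_bridge(n_steps, rng)
        if all(a <= np.interp(s, t, B) <= b for s, a, b in gates):
            accepted.append(B)
    return np.array(accepted)

# Hypothetical single gate: the bridge must lie in [0.5, 1.5] at s = 0.5.
paths = rejection_sample([(0.5, 0.5, 1.5)])
```

The accepted paths then form an empirical sample from the conditional law, but the acceptance rate collapses as the gates get narrower, which is exactly what makes this uneconomical.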

However, given how simple everything becomes once $a_s=b_s$, it feels uneconomical and messy to do this, and I suspect there should be something very straightforward to compute this. Unfortunately, my attempts to check the literature returned nothing useful. So perhaps, I am missing something basic, and everything is super simple, and thus no one bothered to write it down. Or there is a term for this type of process that I am unaware of.

Can anyone help me out?

johaschn
    I dont understand $a$. What does $a(s)$ mean(with a set $s$)? – NN2 Nov 17 '22 at 09:59
  • 1
    And if $s$ is a set, how do you define $B(s)$? – NN2 Nov 17 '22 at 10:02
  • 1
    Yes, bad notation. Sorry. I tried to fix it. So $S$ is a set, $s$ is an element of the set. And each element has two numbers associated, $a_s$ and $b_s$ are numbers (potentially depending on $s$) so that we know that $B(s) \in [a_s,b_s]$. – johaschn Nov 17 '22 at 10:05

1 Answer


First, we remark that the two events below contain the same information and are equal:

  • $\{B_s \in[a_s,b_s] \text{ for all } s\in S\}$
  • $\{W_s \in[a_s,b_s] \text{ for all } s\in S, \text{ and } W_1 = 0\}$

Let $\mathcal{E}$ denote this event. We have $$B_t\mid\mathcal E = (W_t\mid\{W_1 = 0\})\mid\mathcal{E} = W_t\mid(\{W_1 = 0\}\cap \mathcal{E}) = W_t\mid\mathcal{E},$$ where the last equality holds because the event $\mathcal{E}$, written in terms of $W$, already contains $\{W_1 = 0\}$.

Then, the conditional distribution of $B_t$ given the information $\mathcal{E}$ is the conditional distribution of $W_t$ given $\mathcal{E}$.


We determine now the distribution of $W_t|\mathcal{E}$.

For the sake of simplicity, we assume that $S$ contains the two points $0$ and $1$ (so we have $a_0=b_0=a_1=b_1=0$). Hence, we can write $$\mathcal{E} = \{W_s \in[a_s,b_s], \forall s\in S\} $$

As the Brownian motion $W_t$ is Markovian, for all $t \not \in S$ we have $$W_t|\mathcal{E} = W_t|(\{W_x \in[a_x,b_x]\}\cap \{W_y \in[a_y,b_y]\})\tag{1}$$ with

  • $x,y \in S$ defined by $$x =\max\{s\in S : s<t \}, \qquad y =\min\{s\in S : s>t \};$$ in other words, $[x,y]$ is the smallest interval with endpoints in $S$ that contains $t$.

Equation $(1)$ holds because, by the Markov property, all information about the gates before time $x$ is captured by $W_x$, and all information about the gates after time $y$ is captured by $W_y$; given the sigma algebra $\sigma(W_x, W_y)$ generated by these two variables, the path on $(x,y)$ is independent of the rest.
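Picking the neighboring gate times amounts to a one-line lookup (a tiny sketch; the helper name `neighbor_gates` is made up):

```python
def neighbor_gates(t, S):
    """x = max{s in S : s < t}, y = min{s in S : s > t} (hypothetical helper)."""
    return max(s for s in S if s < t), min(s for s in S if s > t)

x, y = neighbor_gates(0.6, [0.0, 0.25, 0.75, 1.0])   # -> (0.25, 0.75)
```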

By the standard formula for a Brownian motion conditioned on its values at two times, we have $$(W_t\mid W_x = z, W_y = w)\sim \mathcal{N}\left(\frac{y-t}{y-x}z+\frac{t-x}{y-x}w,\frac{(y-t)(t-x)}{y-x}\right) \tag{2}$$ Let $\varphi(u;x,t,y,z,w)$ denote the density function of $(W_t\mid W_x = z, W_y = w)$; then $$\color{red}{\varphi(u;x,t,y,z,w) = \frac{1}{\sigma\sqrt{2\pi}}e^{-\frac{(u-m)^2}{2\sigma^2}}} \tag{3}$$ with $$(m,\sigma) = \left(\frac{y-t}{y-x}z+\frac{t-x}{y-x}w,\sqrt{\frac{(y-t)(t-x)}{y-x}} \right)$$ Remark 1: just for information, $(2)$ is equivalent to $$(W_t\mid W_x, W_y) = \frac{y-t}{y-x}W_x+\frac{t-x}{y-x}W_y+Z$$ with $Z = W_t - \frac{y-t}{y-x}W_x-\frac{t-x}{y-x}W_y$ independent of $(W_x, W_y)$; $Z$ follows the normal distribution $\mathcal{N} \left(0,\frac{(y-t)(t-x)}{y-x} \right)$.
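A minimal numerical sketch of $(2)$ and $(3)$ (the function name `cond_params` and the example numbers are mine):

```python
import numpy as np
from scipy.stats import norm

def cond_params(t, x, y, z, w):
    """Mean and standard deviation of W_t | W_x = z, W_y = w (x < t < y), per (2)."""
    m = (y - t) / (y - x) * z + (t - x) / (y - x) * w
    s = np.sqrt((y - t) * (t - x) / (y - x))
    return m, s

# Example: W pinned at 0 at time 0 and at 1 at time 0.5, evaluated at t = 0.25.
m, s = cond_params(t=0.25, x=0.0, y=0.5, z=0.0, w=1.0)
density_at_mean = norm.pdf(m, loc=m, scale=s)   # value of phi from (3) at u = m
```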


Let $p(u)$ denote the density function of $W_t|\mathcal{E}$. Applying Bayes' theorem, we have $$\begin{align} p(u) &= \mathbb{P}(W_t = u|(\{W_x \in[a_x,b_x]\}\cap \{W_y \in[a_y,b_y]\})) \\ &=\frac{\mathbb{P}(\{W_t = u\} \cap (\{W_x \in[a_x,b_x]\}\cap \{W_y \in[a_y,b_y]\}))}{\mathbb{P}(\{W_x \in[a_x,b_x]\}\cap \{W_y \in[a_y,b_y]\})} \tag{4} \end{align}$$

We recall that $(W_x,W_y)$ follows the 2-dimensional Gaussian $\mathcal{N}_2 (\mathbf{0}, \mathbf{\Sigma})$ with mean $\mathbf{0}$ and covariance matrix $\mathbf{\Sigma}$. $$ \mathbf{\Sigma}= \begin{pmatrix} x & x \\ x & y \end{pmatrix} $$ Then the denominator of $(4)$ can be computed easily and is equal to $\color{red}{\Phi_2 \left(\begin{pmatrix} a_x \\ a_y \end{pmatrix} ,\begin{pmatrix} b_x \\ b_y \end{pmatrix} ; \Sigma\right)} $
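The rectangle probability $\Phi_2$ can be evaluated, for instance, via inclusion-exclusion on SciPy's bivariate normal CDF (a sketch; the helper `rect_prob` and the gate bounds are illustrative, with $x=0.25$, $y=0.75$):

```python
from scipy.stats import multivariate_normal

def rect_prob(a, b, cov):
    """P(a1 <= X1 <= b1, a2 <= X2 <= b2) for X ~ N2(0, cov), by inclusion-exclusion."""
    F = multivariate_normal(mean=[0.0, 0.0], cov=cov).cdf
    (a1, a2), (b1, b2) = a, b
    return F([b1, b2]) - F([a1, b2]) - F([b1, a2]) + F([a1, a2])

# Hypothetical gates [-1, 1] at both times, with Sigma built from x = 0.25, y = 0.75.
den = rect_prob((-1.0, -1.0), (1.0, 1.0), [[0.25, 0.25], [0.25, 0.75]])
```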

For the next step, we also denote by $\color{red}{\varphi_2(z,w; x,t,y)}$ the density function of the 2-dimensional Gaussian vector $(W_x,W_y)$.

The numerator of $(4)$ is equal to:

$$\begin{align} \text{N} &:=\mathbb{P}(\{W_t = u\} \cap (\{W_x \in[a_x,b_x]\}\cap \{W_y \in[a_y,b_y]\}))\\ &=\iint_{\left\{\begin{array}{rcr} z\in[a_x,b_x] \\ w\in[a_y,b_y] \end{array} \right\}}\mathbb{P}(\{W_t = u\} \cap \{W_x =z\}\cap \{W_y =w\})dzdw\\ &=\iint_{\left\{\begin{array}{rcr} z\in[a_x,b_x] \\ w\in[a_y,b_y] \end{array} \right\}}\underbrace{\mathbb{P}(\{W_t = u\} \mid \{W_x =z\}\cap \{W_y =w\})}_{\text{we will use }(3)}\cdot \underbrace{\mathbb{P}( \{W_x =z\}\cap \{W_y =w\}) }_{ =\varphi_2(z,w;x,t,y) }dzdw\\ &=\iint_{\left\{\begin{array}{rcr} z\in[a_x,b_x] \\ w\in[a_y,b_y] \end{array} \right\}}\varphi(u;x,t,y,z,w) \cdot\varphi_2(z,w; x,t,y)dzdw\\ \end{align}$$


Thus, the density function of $W_t|\mathcal{E}$ is equal to $$\color{red}{p(u) = \frac{\iint_{\left\{\begin{array}{rcr} z\in[a_x,b_x] \\ w\in[a_y,b_y] \end{array} \right\}}\varphi(u;x,t,y,z,w) \cdot\varphi_2(z,w; x,t,y)dzdw}{\Phi_2 \left(\begin{pmatrix} a_x \\ a_y \end{pmatrix} ,\begin{pmatrix} b_x \\ b_y \end{pmatrix} ; \Sigma\right)}} \tag{5}$$

Remark 2: In general, $(5)$ cannot be computed analytically, but it can easily be computed numerically, as the numerator and the denominator are both double integrals.
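Following Remark 2, here is one possible numerical evaluation of $(5)$ with SciPy (the name `p_density` is mine, and the wide-gate sanity check at the end is illustrative):

```python
import numpy as np
from scipy import integrate
from scipy.stats import multivariate_normal, norm

def p_density(u, t, x, y, ax, bx, ay, by):
    """Evaluate (5) numerically: density of W_t given W_x in [ax,bx], W_y in [ay,by]."""
    phi2 = multivariate_normal(mean=[0.0, 0.0], cov=[[x, x], [x, y]])

    def integrand(w, z):                     # dblquad passes the inner variable first
        m = (y - t) / (y - x) * z + (t - x) / (y - x) * w
        s = np.sqrt((y - t) * (t - x) / (y - x))
        return norm.pdf(u, loc=m, scale=s) * phi2.pdf([z, w])

    num, _ = integrate.dblquad(integrand, ax, bx, ay, by)          # z outer, w inner
    den, _ = integrate.dblquad(lambda w, z: phi2.pdf([z, w]), ax, bx, ay, by)
    return num / den

# Sanity check: with very wide gates the conditioning is essentially vacuous,
# so p(u) should be close to the unconditional N(0, t) density of W_t.
val = p_density(0.0, t=0.5, x=0.25, y=0.75, ax=-5.0, bx=5.0, ay=-5.0, by=5.0)
```

Note that $x>0$ is needed here, since for $x=0$ the covariance matrix $\Sigma$ is singular ($W_0=0$ is deterministic) and the problem degenerates to a single integral, as discussed in the comments below.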

NN2
  • Wow! Thanks, this looks amazing! I'll work through it in the coming days and see if I can follow everything. – johaschn Nov 18 '22 at 21:05
  • @johaschn You are welcome! – NN2 Nov 18 '22 at 22:38
  • Just to be sure I am not missing something: If $a_x=b_x=a_y=b_y=0$ we get that $u \sim \mathcal N(0,t)$.

    However, equation (5) assumes that for both intervals $a_x<b_x$ and $a_y<b_y$. So to get the special case in which one (or both) of the gates are singletons we need to take the simpler uni-dimensional objects to get a solution (or work with limits)?

    – johaschn Nov 21 '22 at 13:21
  • @johaschn this occurs only for the cases $t=0$ or $t=1$. In the two cases, the conditional variable (example, $W_0|\mathcal{E} $) is equal to $0$ by definition. – NN2 Nov 21 '22 at 13:38
  • I must be missing something, then. If, say, only for $t=0.5$, I know (apart from the endpoints $t=0,t=1$) that $a_{0.5}=b_{0.5}=1$, I would be in the standard Brownian bridge environment, right?

    I get that for t=1/4, u is distributed normally with $\mu=1/4$ and $\sigma=1/8$.

    Punching these into (5) yields no result because the integrals run over an area with no (ex ante) mass (so it's 0/0).

    Similarly, if I know apart from the bounds only $[a_{0.5},b_{0.5}]=[0,1$]. Then, again, I run into trouble in eq (5) because either $w$ or $z$ are 0-probability realizations leading again to 0/0.

    – johaschn Nov 21 '22 at 14:03
  • Just to be sure: This is no complaint, I just try to understand how your solution behaves in the special cases – johaschn Nov 21 '22 at 14:05
  • 1
    @johaschn I see. First, the solution applies for $t\not \in S$ (for $t\in S$, you already observe $B_t$, so the distribution, or the value, is known). Second, when you wrote the case $t=0.5$, I guess you mean $y=0.5$ in formula (1) (and so $W_y=1$). Indeed, if $W_x$ or $W_y$ (or both) is constant, the problem becomes simpler: formula (5) degenerates to a single integral (or even a closed-form expression when both $W_x, W_y$ are known). If necessary, I can add these formulas when I have my computer at home (I am currently using my smartphone, which is not practical for writing formulas) – NN2 Nov 21 '22 at 14:27
  • 1
    All good, no need. I understand what happens in those cases. I was just wondering whether I was doing the right thing here by playing around with the limits to check if I got everything. – johaschn Nov 21 '22 at 14:30