5

Given a Wiener process X, how do I prove this?

$R_x(s,t) = E[X(s)X(t)] = \min(s,t)$

There seems to be a trick involving splitting into the two cases $s<t$ and $s>t$, but I can't figure out why this would be helpful.

This is what I've got so far:

$R_x(s,t) = E[X(s)X(t)] = E[X(s)(X(t)-X(s)+X(s))]$ $= E[X(s)^2]+E[X(s)(X(t)-X(s))] = E[X(s)^2]+E[(X(s)-X(0))(X(t)-X(s))]$ $= E[X(s)^2]+E[X(s)-X(0)]\,E[X(t)-X(s)]$ $= E[X(s)^2]+E[X(s)]\,E[X(t)-X(s)] = E[X(s)^2] = \operatorname{Var}[X(s)] = s$

But the same thing could be done with t instead of s... So what am I doing wrong?
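For a quick numerical sanity check of the identity itself, here is a small Monte Carlo sketch (illustrative only; the discretization, the time points $s=0.3$, $t=0.7$, and the sample size are arbitrary choices of mine):

```python
import numpy as np

# Monte Carlo sanity check that E[X(s)X(t)] is close to min(s, t).
rng = np.random.default_rng(0)
n_paths, n_steps, T = 100_000, 100, 1.0
dt = T / n_steps

# Simulate Wiener paths from independent N(0, dt) increments.
increments = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
paths = np.cumsum(increments, axis=1)          # paths[:, k] ~ X((k + 1) * dt)

s, t = 0.3, 0.7
i_s, i_t = round(s / dt) - 1, round(t / dt) - 1

print(np.mean(paths[:, i_s] * paths[:, i_t]))  # should come out close to 0.3 = min(s, t)
```

With these settings the printed estimate should land close to $0.3 = \min(0.3, 0.7)$, which is reassuring before attempting the proof.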

Ana M
  • 371
  • Hello and welcome to MSE! Be sure to provide thorough background or your own work to avoid having your questions prematurely closed! – Adam Hughes Aug 01 '14 at 02:55
  • @Ana Mzmz: You edited with the proof as I was posting -- the argument is symmetric with respect to $s$ and $t$. – RRL Aug 01 '14 at 03:13

2 Answers

7

Brownian motion has independent increments and $X(t) \sim N(0,t)$.

If $t < s$ then

$$0 = E[X(t)(X(s)-X(t))] = E[X(t)X(s)]-E[X(t)^2]=E[X(t)X(s)]-t.$$

Hence,

$$E[X(t)X(s)] = t = \min(t,s).$$

Similarly, if $s < t$, then $E[X(t)X(s)] = s = \min(t,s)$.
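To make the symmetry completely explicit, one way to write both cases at once (introducing $u = \min(s,t)$ and $v = \max(s,t)$, which is just my notation) is

$$E[X(s)X(t)] = E\big[X(u)\,\big(X(u) + (X(v)-X(u))\big)\big] = E[X(u)^2] + E[X(u)]\,E[X(v)-X(u)] = u = \min(s,t),$$

where the middle term vanishes because the increment $X(v)-X(u)$ is independent of $X(u)$ and has mean zero.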

RRL
  • 92,835
  • I still don't understand the importance of assuming $t<s$. What's wrong with the following, for example?

Assume $s<t$. So $0=E[X(t)(X(s)-X(t))]=E[X(t)X(s)]-E[X(t)^2]$, hence $E[X(t)X(s)]=t \neq \min(t,s)$

    – Ana M Aug 01 '14 at 03:14
  • 1
    That is incorrect because if $s < t$ then $t = \max(t,s)$. Furthermore $X(t)-X(0)$ and $X(s)-X(t)$ are not independent in this case -- the increments overlap. – RRL Aug 01 '14 at 03:18
  • Thank you! The overlapping explanation is what I was looking for. – Ana M Aug 01 '14 at 03:23
  • You're welcome. – RRL Aug 01 '14 at 03:23
0

The accepted answer is 100% correct, but to a less quick-witted reader it might not be immediately obvious that the easiest way to solve the problem is to write down the equality $0 = E[X(t)(X(s)-X(t))]$ and keep working with that expression.

I think it's more intuitive to start from the definition of covariance, and it's also worth emphasizing why the independence-of-increments property is useful inside the expectation.

$$Cov(X,Y):=\mathbb{E}\left[XY\right]-\mathbb{E}\left[X\right]\mathbb{E}\left[Y\right]$$

The autocovariance in the context of Brownian motion is nothing more than $Cov(W_s,W_t)$, therefore:

$$Cov(W_s,W_t)=\mathbb{E}\left[W_sW_t\right]-\mathbb{E}\left[W_s\right]\mathbb{E}\left[W_t\right]=\mathbb{E}\left[W_sW_t\right]$$

Above, we trivially used the property that the expected value of Brownian motion is zero.

Now assume that $t>s$ and write $t$ as $t=s+h$. Then:

$$\mathbb{E}\left[W_sW_t\right]=\mathbb{E}\left[W(s)W(s+h)\right]$$

Now, in my view, the critical step is to decompose $W(s+h)$ as $W(s) + \big(W(s+h)-W(s)\big)$. Note that $W(s+h)$ does not equal $W(s)+W(h)$ for two independent copies $W(s)$ and $W(h)$; what is true is that the increment $W(s+h)-W(s)$ is independent of $W(s)$ and has the same distribution as $W(h)$. Since this is all that matters inside the expectation, we may treat the increment as an independent copy of $W(h)$ and write:

$$Cov(W_s,W_t)=\mathbb{E}\left[W(s)(W(s)+W(h))\right]=\mathbb{E}\left[W(s)^2+W(s)W(h)\right]$$

Using the linearity of expectation and the independence of $W(s)$ and $W(h)$, we finally get:

$$Cov(W_s,W_t)=\mathbb{E}\left[W(s)^2\right]+\mathbb{E}\left[W(s)\right]\mathbb{E}\left[W(h)\right]=\mathbb{E}\left[W(s)^2\right]=s$$

We therefore conclude that indeed $Cov(W_s,W_t)=\min(s,t)$.

The above "lengthy" reasoning might be tedious and not the most elegant to a talented pure mathematician, but for the "average Joe" (i.e. me) it is easier to understand and replicate.
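If it helps to see the decomposition numerically, here is a small illustrative sketch (the values of $s$, $h$, and the sample size below are arbitrary choices): simulate $W(s)$ and an independent increment distributed like $W(h)$, then check the product moment.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500_000
s, h = 0.4, 0.5        # t = s + h = 0.9, so min(s, t) = 0.4

# W(s) ~ N(0, s); the increment W(s+h) - W(s) is independent of W(s) and ~ N(0, h).
W_s = rng.normal(0.0, np.sqrt(s), size=n)
increment = rng.normal(0.0, np.sqrt(h), size=n)
W_t = W_s + increment  # same joint law as (W(s), W(s+h))

print(np.mean(W_s * W_t))  # should be close to 0.4 = min(s, t)
```

The printed value should come out close to $s = 0.4$, in line with $Cov(W_s,W_t)=\min(s,t)$.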

Jan Stuller
  • 1,279