
Let $N_t$ be a Poisson process with rate $\lambda$ and let $M_t=N_t-\lambda t$. I am then trying to find $$ \mathbb{E}\left[\int_0^tN_s\,\mathrm{d}M_s\right]. $$ I have tried applying the definition of an Itô integral to find that \begin{align*} \mathbb{E}\left[\int_0^tN_s\,\mathrm{d}M_s\right]&=\mathbb{E}\left[\sum_{j\geq0}N_{t_j}(M_{t_{j+1}}-M_{t_j})\right]\\ &=\sum_{j\geq0}\Big(\mathbb{E}[N_{t_j}]\mathbb{E}[N_{t_{j+1}}-N_{t_j}]-\lambda(t_{j+1}-t_j)\mathbb{E}[N_{t_j}]\Big)\\ &=\sum_{j\geq0}\lambda^2t_j(t_{j+1}-t_j)-\sum_{j\geq0}\lambda^2(t_{j+1}-t_j)t_j\\ &=\frac{1}{2}\lambda^2t^2-\frac{1}{2}\lambda^2t^2=0. \end{align*} However, from the way the next question is posed, I feel this must not be right. The next question asks why $\int_0^tN_s\,\mathrm{d}M_s$ cannot be a martingale, which would be the case if, for example, its expectation depended on $t$.

A further question replaces $N_s$ with $N_{s^-}$, which I feel should yield the result above.

Hence, I feel I am missing an important step. Can I not apply independence of increments in this way?

I have found this thread and viewed the mentioned work, but it did not clear things up for me.

Renze

2 Answers


It is not true that the expectation of $$ \textstyle \int_0^t N_{s}\,dM_s $$ is zero. The reason is subtle, and I will try to make it clear with a detailed answer.

First I show:

  • The integral $\int_0^t N_{s-}\,dM_s$ is a martingale, and its expectation is zero.

  • The integral $\int_0^t N_{s}\,dM_s$ is not a martingale (not even a local one), and its expectation is $\lambda t\,.$

Proof. Since $N_s$ is increasing and changes only by jumps the first integral is pathwise defined as $$\tag{1} \textstyle\sum\limits_{s\le t} N_{s-}\,\Delta N_s-\lambda\int_0^tN_{s-}\,ds\,. $$ The first term equals $\sum\limits_{n=1}^{N_t}(n-1)$ which is increasing in $t$ and equals $\frac{1}{2}(N_t-1)N_t\,.$ The expectation of this is $\frac{1}{2}\mathbb E[N_t^2-N_t]=\frac{1}{2}(\lambda t+\lambda^2t^2-\lambda t)=\frac{1}{2}\lambda^2 t^2\,.$ Therefore, \begin{align}\tag{2} \textstyle\mathbb E\Big[\sum\limits_{s\le t} N_{s-}\,\Delta N_s\Big]= \frac{1}{2}\lambda^2 t^2\,. \end{align} Next, \begin{align}\tag{3} \mathbb E\Big[\textstyle\int_0^tN_{s-}\,ds\Big]=\int_0^t\lambda s\,ds=\frac{1}{2}\lambda t^2\,. \end{align} It follows that \begin{align} \textstyle\mathbb E\Big[\sup\limits_{t\le T}\Big|\int_0^t N_{s-}\,dM_s\Big|\Big]&\le\textstyle\sup\limits_{t\le T}\mathbb E\Big[\sum\limits_{s\le t} N_{s-}\,\Delta N_s\Big]+ \sup\limits_{t\le T}\mathbb E\Big[\lambda\int_0^tN_{s-}\,ds\Big]\\[3mm] &=\lambda^2T^2<\infty.\tag{4} \end{align} By [1] Chap. III, Thm. 29 the stochastic integral $\int_0^t N_{s-}\,dM_s$ is a local martingale and by (4) and [1] Chap. I, Thm. 51 it is a true martingale. The fact that it must have zero expectation is clear from its value at $t=0\,.$
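Both integrals are given by pathwise sums (formula (1) and its analogue with $N_s$), so the two expectations can be checked by simulation. A minimal Monte Carlo sketch in Python (the parameter values $\lambda=2$, $T=1$ and the seed are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(0)
lam, T, n_paths = 2.0, 1.0, 50_000

def both_integrals(lam, T, rng):
    """Pathwise values of ∫ N_{s-} dM_s and ∫ N_s dM_s over [0, T]."""
    n = rng.poisson(lam * T)                   # N_T
    jumps = np.sort(rng.uniform(0.0, T, n))    # jump times given N_T = n
    k = np.arange(1, n + 1)
    # Each jump at time s_k contributes (T - s_k) to ∫_0^T N_s ds
    int_N_ds = np.sum(T - jumps)
    left  = np.sum(k - 1) - lam * int_N_ds     # integrand N_{s-}: value k-1 at the k-th jump
    right = np.sum(k)     - lam * int_N_ds     # integrand N_s  : value k   at the k-th jump
    return left, right

vals = np.array([both_integrals(lam, T, rng) for _ in range(n_paths)])
print(vals.mean(axis=0))   # first average ≈ 0, second ≈ lam * T
```

The first average should be near $0$ and the second near $\lambda T$; moreover, on every single path the two values differ by exactly $N_T$.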

The second statement can be shown by contradiction. If both stochastic integrals were local martingales, then their difference would be a local martingale. That difference is $$\tag{5} \textstyle\int_0^t (N_s-N_{s-})\,dM_s=\textstyle\int_0^t \Delta N_s\,dM_s\,. $$ Since $N_t$ changes only by jumps of size one, $\Delta N_s=N_s-N_{s-}\in\{0,1\}\,,$ we have \begin{align}\tag{6} &\textstyle\int_0^t \Delta N_s\,dM_s =\sum\limits_{s\le t}\Delta N_s\,\Delta N_s-\lambda\int_0^t\Delta N_s\,ds\,. \end{align} The last term is zero because $N$ has only finitely many jumps in the interval $[0,t]\,.$ The first term is the sum of the jumps until $t\,,$ in other words, it is $N_t\,,$ but that is not a local martingale. We have a contradiction. The expectation $\mathbb E[\textstyle\int_0^t N_s\,dM_s]=\lambda t$ follows from what we have just shown: $$\tag{7} \textstyle\int_0^t N_s\,dM_s=\underbrace{\int_0^tN_{s-}\,dM_s}_{\mathbb E[\,.\,]=0}+\underbrace{N_t}_{\mathbb E[\,.\,]=\lambda t}\,. $$ $$\tag*{$\Box$} \quad $$ Remarks.

  • The second integral is a prime example of a well-defined stochastic integral whose integrand is not predictable. Since $N_s$ is increasing and changes only by jumps, this integral is simply pathwise defined as $$\tag{8} \textstyle\sum\limits_{s\le t} N_{s}\,\Delta N_s-\lambda\int_0^tN_s\,ds\,. $$

  • At first glance the computation in the OP that $\int_0^t N_s\,dM_s$ has expectation zero seems convincing, but $N_s$ is right continuous rather than left continuous, hence not predictable, and therefore this integral cannot be obtained as a limit of left-endpoint Riemann-Stieltjes sums.

  • We have shown in the proof above that $$\tag{9} \textstyle\int_0^t N_{s}\,dM_s-\int_0^t N_{s-}\,dM_s=[N,N]_t=N_t $$ holds and that $$\tag{10} \textstyle\int_0^t N_{s}\,dM_s-N_t=\int_0^t(N_s-1)\,dN_s-\lambda\int_0^tN_s\,ds $$ is a martingale.

  • Also, since $[N,N]_t=[N,M]_t$ it follows from the integration-by-parts formula $$\tag{11} \textstyle N_tM_t=\int_0^tN_{s-}\,dM_s+\int_0^tM_{s-}\,dN_s+[N,M]_t $$ that $$\tag{12} \textstyle N_tM_t=\int_0^tN_s\,dM_s+\int_0^tM_{s-}\,dN_s $$ holds.
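Since (12) is an exact pathwise identity, it can be verified on simulated paths without any averaging. A quick Python sketch (parameters and seed are arbitrary), using the facts that each jump at time $s_k$ contributes $T-s_k$ to $\int_0^T N_s\,ds$ and that $M_{s_k^-}=(k-1)-\lambda s_k$:

```python
import numpy as np

rng = np.random.default_rng(1)
lam, T = 3.0, 2.0

errs = []
for _ in range(100):                       # check identity (12) on 100 paths
    n = rng.poisson(lam * T)
    s = np.sort(rng.uniform(0.0, T, n))    # jump times on [0, T]
    k = np.arange(1, n + 1)
    int_N_dM  = np.sum(k) - lam * np.sum(T - s)   # ∫ N_s dM_s, pathwise
    int_Mm_dN = np.sum((k - 1) - lam * s)         # ∫ M_{s-} dN_s: M at s_k^- is (k-1) - λ s_k
    errs.append(abs(n * (n - lam * T) - (int_N_dM + int_Mm_dN)))   # vs N_T · M_T

print(max(errs))   # exact up to floating-point error
```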

  • The relationships (9) and (12) have an analogy in the relationship between the Ito and the Stratonovich integral.

[1] P.E. Protter, Stochastic Integration and Differential Equations. 2nd ed.

Kurt G.

Maybe I'm a bit late, but... Since $M$ has bounded variation, for each $\omega \in \Omega$, $M$ is a legitimate integrator in the Riemann-Stieltjes sense. Therefore, we may write (using left endpoints) $$\int_0^t N_s \,dM_s = \lim_{n\to \infty} \sum_{i=1}^n N_{t_{i-1}} (M_{t_i} - M_{t_{i-1}}).$$ Now, observe that $$\int_0^t N_s \,dM_s = \int_0^t M_s \,dM_s + \int_0^t \lambda s \,dM_s.$$ We treat each integral on the RHS individually. Summation by parts gives $$ \sum_{i=1}^n \lambda t_{i-1} (M_{t_{i}} - M_{t_{i-1}}) = \lambda \Big( M_t \cdot t - \sum_{i=1}^n M_{t_i} ( t_i - t_{i-1}) \Big) \xrightarrow{n \to \infty} \lambda \Big( M_t \cdot t - \int_0^t M_s \,ds \Big), $$ and the usual algebraic identity gives $$ \sum_{i=1}^n M_{t_{i-1}} (M_{t_i} - M_{t_{i-1}}) = \frac{1}{2} \Big( M_t^2 - \sum_{i=1}^n (M_{t_i} - M_{t_{i-1}})^2 \Big). $$ The last equation leaves us wondering about the quadratic variation of the compensated Poisson process... If the world were perfect, we could say that $M_s$ has zero quadratic variation, since it has bounded variation. However, this only applies to functions that are continuous! The compensated Poisson process is not continuous, so it most likely has nonzero quadratic variation. Write $\Delta M_i = M_{t_i} - M_{t_{i-1}}$, $\Delta N_i = N_{t_i} - N_{t_{i-1}}$ and $\Delta t_i = t_i - t_{i-1}$, so that $\Delta M_i = \Delta N_i - \lambda\,\Delta t_i$ and \begin{align*} \sum_{i=1}^n \Delta M_i^2 = \sum_{i=1}^n \Delta N_i^2 - 2\lambda \sum_{i=1}^n \Delta N_i\, \Delta t_i + \lambda^2 \sum_{i=1}^n \Delta t_i^2\,. \end{align*} Finally, observe that $\lambda t$ is continuous, hence $\lambda^2\sum_{i=1}^n \Delta t_i^2 \to 0$ and $\lambda\sum_{i=1}^n \Delta N_i\, \Delta t_i \to 0$, and that $\Delta N_i \in \{0,1\}$ for a fine enough partition, thus $\sum_{i=1}^n \Delta N_i^2 = \sum_{i=1}^n \Delta N_i$.
Therefore, $$ \lim_{n\to \infty} \sum_{i=1}^n (M_{t_i} - M_{t_{i-1}})^2 = \lim_{n\to \infty} \sum_{i=1}^n \Delta N_i = N_t\,. $$ All in all, we get that $$\int_0^t N_s \,dM_s = \frac{1}{2} (M_t^2 - N_t) + \lambda t M_t - \lambda \int_0^t M_s \,ds = \frac{1}{2} \big(N_t^2 - N_t - \lambda^2 t^2\big) - \lambda \int_0^t M_s \,ds.$$ Hence, $$ \mathbb{E}\Big[ \int_0^t N_s \,dM_s \Big] = \frac{1}{2} \Big( \mathbb{E}[N_t^2] - \mathbb{E}[N_t] - \lambda^2 t^2 \Big) - \lambda \int_0^t \mathbb{E}[M_s] \,ds = 0.$$ Oh no... The expectation is a constant, just like one would expect from a martingale. So what's the problem? The problem lies in the continuity of the integrand! We need to make the integrand left continuous, i.e. we need to consider the process $N_{s^-}$: the left-endpoint sums above actually converge to $\int_0^t N_{s^-}\,dM_s$. So, if we really want to be pedantic about it, the R-S integral asked about in this question doesn't exist!
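As a numerical sanity check on the algebra, the closed form obtained from the left-endpoint sums agrees path by path with the pathwise value $\int_0^t N_{s^-}\,dM_s=\sum_{s\le t}N_{s^-}\,\Delta N_s-\lambda\int_0^t N_{s^-}\,ds$, confirming that these sums compute the integral with the predictable integrand. A short Python sketch (parameters and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
lam, T = 2.0, 1.5

errs = []
for _ in range(100):
    n = rng.poisson(lam * T)
    s = np.sort(rng.uniform(0.0, T, n))     # jump times on [0, T]
    k = np.arange(1, n + 1)
    int_N_ds = np.sum(T - s)                # ∫_0^T N_s ds (each jump adds T - s_k)
    int_M_ds = int_N_ds - lam * T**2 / 2    # ∫_0^T M_s ds
    closed   = 0.5 * (n**2 - n - lam**2 * T**2) - lam * int_M_ds
    pathwise = np.sum(k - 1) - lam * int_N_ds   # ∫ N_{s-} dM_s, pathwise
    errs.append(abs(closed - pathwise))

print(max(errs))   # agreement up to floating-point error
```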

Oscar
  • You are right that for general semimartingales we can define stochastic integrals only for predictable integrands, which is mostly achieved by requiring them to be left continuous. Therefore, a priori, we expect $\int_0^tN_{s-}\,dM_s\,.$ *However, the other integral $\int_0^tN_s\,dM_s$ exists* as well because what we integrate here is very benign. Note that $\int_0^tN_s\,dM_s=\sum_{s\le t}N_s\,\Delta N_s-\lambda\int_0^tN_s\,ds$ and that $N_s$ has only finitely many jumps in $[0,t]$, each of size one. OP's question is interesting and I think deserves a better answer. – Kurt G. Mar 19 '23 at 20:01
  • Thanks for pointing that out! And very nice answer as well xD I can see what you mean... Very interesting question indeed. – Oscar Mar 21 '23 at 13:00
  • Let me add a few more insights that I got during the last few hours. – Kurt G. Mar 21 '23 at 13:01