
Consider the stochastic integral of a process $H$ with respect to the local martingale $M$: $$ (H\bullet M)_t = \int_{[0,t]} H_s\,\mathrm d M_s. $$

We know that when $H$ is predictable and sufficiently integrable, $H\bullet M$ is a local martingale. It is also well known that when $H$ is not predictable, $H\bullet M$ need not be a local martingale; this answer gives a nice example demonstrating this fact. On the other hand, when $M$ happens to be continuous, we are able to define $H\bullet M$ for progressively measurable processes $H$ as well (cf. Karatzas and Shreve).

This makes it natural to ask where exactly in the construction of the stochastic integral the predictability of the integrand matters. Unfortunately, I can't see where predictability plays a role. Can anyone help clarify this?


Context and Background

A typical construction of the stochastic integral is to first define the integral for simple predictable processes. It is straightforward to show that when $H$ is simple predictable, then $H\bullet M$ is a local martingale. Standard arguments also show that any (sufficiently integrable) predictable process is a limit of simple predictable ones.

Then, for a general predictable process $H$ (again, assuming sufficient integrability), we fix a sequence of simple predictable processes $\{ H^n\}$ with $H^n \to H$, and define the integral $H\bullet M=\lim H^n \bullet M$. (One can show that $H \bullet M$ does not depend on our choice of approximating sequence and is thus well-defined.) $H\bullet M$ inherits the (local) martingale property from its approximating sequence.
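(For concreteness, here is a minimal sketch of that last step, under the assumption that $M$ is a square-integrable martingale and that $H^n \to H$ in the corresponding Itô-isometry norm; the general local martingale case follows by localisation. By the isometry, $$ \mathbb E\Big[\big((H^n\bullet M)_t-(H\bullet M)_t\big)^2\Big] = \mathbb E\int_{(0,t]} (H^n_s-H_s)^2\,\mathrm d[M]_s \longrightarrow 0, $$ so $(H^n\bullet M)_t \to (H\bullet M)_t$ in $L^2(\mathbb P)$ for every fixed $t$. Since conditional expectation is an $L^2$-contraction, the identity $\mathbb E[(H^n\bullet M)_t\mid\mathcal F_s]=(H^n\bullet M)_s$ for $s\le t$ passes to the limit, which is how $H\bullet M$ inherits the martingale property.)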

It seems to me that this procedure works just as well even if $H$ is not necessarily predictable but merely a càdlàg adapted process, even for general (i.e. not necessarily continuous) local martingales.

What am I missing?

I know I am glossing over quite a few details here, since I don't want to make this post much longer than necessary. I can fill in the details as needed. For reference, the construction I have in mind is the one in Cohen and Elliott (2015).

  • If your process is not predictable you lose the local martingale property and the (Lebesgue) dominated convergence theorem (Protter's book explains this at length; see also G. Lowther's blog Almost Sure). You can look at the Stratonovich integral to get a glimpse of what happens: you get a "drift term" that makes you lose the martingale property (when you transform it back into an Itô framework). The Skorokhod integral looks into the future and is defined by a kind of "duality method"; you lose even more, as the integral is no longer an adapted process, but you gain some formulas (Clark–Ocone, integration by parts, etc.). – TheBridge Sep 30 '19 at 07:52
  • @TheBridge Thanks for your comment. I understand that there are things you lose when the integrand is not predictable (though some of the things you mention in your comment are new to me; I will check them out, thank you). I was hoping to have a better understanding of why you lose them. If you have anything to post as an answer in that direction, I’d still appreciate it, even if it’s not a dissection of the proof I refer to in my question. – Theoretical Economist Oct 01 '19 at 09:28
  • Ask yourself why you lose the martingale property. Protter, at the end of the first chapter of his book, also shows with a classy argument why naïve stochastic integration is "impossible"; maybe you will get your "why" then. Regards. – TheBridge Oct 01 '19 at 13:54
  • @TheBridge Thanks, I’m familiar with the use of the Uniform Boundedness Principle to show why we can’t just define the stochastic integral the same way we would a Lebesgue integral. I’ll have another look at the argument. – Theoretical Economist Oct 01 '19 at 13:58
  • @TheBridge I thought about this a little, and have found my own answer which I posted below. I'd appreciate it if you had a look. – Theoretical Economist Oct 17 '19 at 05:45

2 Answers


It might be helpful to take a look at the discrete martingale transform.

Given a martingale $(M_k)_{k \in \mathbb{N}_0}$ with respect to a filtration $(\mathcal{F}_k)_{k \in \mathbb{N}_0}$ and a process $(C_k)_{k \in \mathbb{N}_0}$, define the discrete martingale transform by

$$(C \bullet M)_n := \sum_{j=1}^n C_j (M_{j}-M_{j-1}), \qquad (C \bullet M)_0 := 0.$$

If $(C_k)_{k \in \mathbb{N}_0}$ is predictable, i.e. $C_k$ is $\mathcal{F}_{k-1}$-measurable for each $k \geq 1$, then $C \bullet M$ is a martingale (...assuming that everything is nicely integrable). This corresponds, essentially, to the fact that the stochastic integral of a simple predictable process with respect to a (time-continuous) martingale is a martingale (...again, provided that everything is nicely integrable). If the process is not predictable, then the martingale property fails, in general, to hold. Since martingales have constant expectation, the condition

$$0 = \sum_{j=1}^n \mathbb{E}(C_j (M_j-M_{j-1}))$$

is necessary for $C \bullet M$ to be a martingale. Since this needs to hold for all $n$, we actually need

$$0 = \mathbb{E}(C_n (M_n-M_{n-1})), \qquad n \in \mathbb{N}.$$
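(For contrast, here is where predictability makes this condition automatic: a one-line sketch, assuming $C$ is bounded so that all expectations exist, $$ \mathbb{E}\big(C_n (M_n-M_{n-1})\big) = \mathbb{E}\Big(\mathbb{E}\big[C_n (M_n-M_{n-1}) \mid \mathcal{F}_{n-1}\big]\Big) = \mathbb{E}\Big(C_n\, \mathbb{E}\big[M_n-M_{n-1} \mid \mathcal{F}_{n-1}\big]\Big) = 0, $$ where pulling $C_n$ out of the inner conditional expectation is exactly the step that requires $C_n$ to be $\mathcal{F}_{n-1}$-measurable.)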

If $C$ is not predictable, there is no reason why this should be true. The difference $M_n-M_{n-1}$ has expectation zero, but since we are multiplying it by something which can be correlated with $M$, the product will, in general, fail to have zero expectation. For instance, we could choose $C_n := \frac{1}{2} (M_{n-1}+M_n)$ and see that

$$\mathbb{E}\big(C_n (M_n-M_{n-1})\big) = \frac{1}{2}\mathbb{E}\big((M_n+M_{n-1})(M_n-M_{n-1})\big) = \frac{1}{2}\big( \mathbb{E}(M_n^2)-\mathbb{E}(M_{n-1}^2)\big) = \frac{1}{2} \big(\mathbb{E}\langle M \rangle_n-\mathbb{E}\langle M \rangle_{n-1}\big)$$

where $\langle \cdot \rangle$ denotes the (predictable) quadratic variation. The expression on the right-hand side is, in general, strictly positive. For instance, if $M$ is a "discretized" Brownian motion, then it equals $1/2$. This is exactly the phenomenon which we observe when studying the Stratonovich integral (see the comment by @TheBridge).
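A quick Monte Carlo sketch (in Python; an illustrative stand-in of mine, not part of the original answer) shows the drift numerically. It uses a symmetric $\pm 1$ random walk as the martingale and compares the predictable choice $C_j = M_{j-1}$ with the non-predictable midpoint choice $C_j = \tfrac12(M_{j-1}+M_j)$ from above:

```python
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_steps = 100_000, 20

# Martingale M: a symmetric +-1 random walk started at 0, shape (n_paths, n_steps + 1).
steps = rng.choice([-1.0, 1.0], size=(n_paths, n_steps))
M = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(steps, axis=1)], axis=1)
dM = np.diff(M, axis=1)  # increments M_j - M_{j-1}

# Predictable integrand C_j = M_{j-1} (known at time j-1): the transform stays centred.
transform_predictable = np.sum(M[:, :-1] * dM, axis=1)

# Non-predictable midpoint integrand C_j = (M_{j-1} + M_j) / 2: a drift appears.
transform_midpoint = np.sum(0.5 * (M[:, :-1] + M[:, 1:]) * dM, axis=1)

print("mean, predictable C:", transform_predictable.mean())  # close to 0
print("mean, midpoint C:   ", transform_midpoint.mean())     # close to n_steps / 2 = 10
```

The midpoint rule is exactly the discrete analogue of the Stratonovich correction mentioned in the comments.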

– saz
  • Dear @saz, thank you for the answer. As with your other answers, it was quite enlightening. I hope you don't mind that I take a few days to mull this over, but I expect I will be accepting your answer when that period is up. – Theoretical Economist Oct 03 '19 at 13:39
  • @TheoreticalEconomist I'm glad to hear that you found the answer helpful. – saz Oct 03 '19 at 17:47
  • Apologies for taking longer than I thought to get around to accepting this. Your answer helped me realise (what I think is) an important point regarding the definition of the stochastic integral of a simple process. I posted my thoughts in an answer below. I'd appreciate it if you had a look, and if you could let me know in case my reasoning is actually mistaken. – Theoretical Economist Oct 17 '19 at 05:43
  • @TheoreticalEconomist To be honest, I do not really see how your answer fits with mine. Why do you think that the predictability of $H$ plays no role for the martingale property of $H \bullet M$...? In my answer, I'm saying essentially the exact converse, don't I? Take for instance $M =$ Brownian motion and $$H_{t_i} = \frac{M_{t_{i+1}}+M_{t_i}}{2}.$$ Why do you believe $(H \bullet M)$ to be a martingale? – saz Oct 19 '19 at 18:58
  • Thanks for reading my answer; I suppose I need to think about this some more then. As for your question: I don't. Your $H_{t_i}$ is not $\mathcal F_{t_i}$-measurable, so we can't define a simple process this way. – Theoretical Economist Oct 19 '19 at 19:01
  • To be clear, however, I do think predictability does play a role in the fact that $H\bullet M$ is a martingale. – Theoretical Economist Oct 19 '19 at 19:05
  • @TheoreticalEconomist Well, I see, perhaps I just got your answer wrong. Seems that we are having slightly different definitions in mind (... for me the general definition of a "simple process" does not include adaptedness). – saz Oct 19 '19 at 19:08
  • I see. I define my class of simple processes in my answer, if you have the time to look at it again. If not, it's fine. Thanks for the help thus far! – Theoretical Economist Oct 19 '19 at 19:12
  • Actually, now that you point it out, it seems the adaptedness point is crucial. The proof I have for the fact that $H \bullet M$ is a martingale when $H$ is simple does use the fact that $H$ is adapted. – Theoretical Economist Oct 19 '19 at 19:17
  • @TheoreticalEconomist Yes, the adaptedness of $H$ is certainly crucial (... and it is not mentioned in your answer, that's why I was somewhat confused). – saz Oct 19 '19 at 19:19
  • It's there, where I say $H^i$ is $\mathcal F_{t_i}$ measurable. I guess it's easy to miss, apologies for that! – Theoretical Economist Oct 19 '19 at 19:24
  • Oh, I see what you mean. My mistake! – Theoretical Economist Oct 19 '19 at 19:25

saz's excellent answer helped me realise the following point. While the predictability of the integrand does not seem to play an important role in the proof that the stochastic integral of a simple predictable process is a martingale, it is nevertheless essential to the definition of the stochastic integral of a simple process, and it is this definition that guarantees the martingale property. I will try to illustrate this assertion below.

Denote by $\Lambda$ the space of (bounded, left-continuous) simple predictable processes. That is, $H\in\Lambda$ whenever there is a finite sequence of stopping times $0=t_0 <t_1<t_2<\cdots<t_n<t_{n+1}=\infty$ and a family $\{H^i\}_{i=0}^n$ of bounded random variables such that $H^i$ is $\mathcal F_{t_i}$-measurable for each $i$, and $$ H_0 =H^0 \quad \text{and} \quad H_t = H^i \; \text{for} \; t\in(t_i,t_{i+1}]. $$

For a square-integrable martingale $M$ (the local martingale case is similar) and $H \in \Lambda$, the stochastic integral $H\bullet M$ is defined as $$ (H\bullet M)_t = H_0M_0 + \sum_i H^i (M_{t_{i+1}\wedge t}-M_{t_i\wedge t}). \tag{$\star$}\label{1} $$

It is easy to verify that the expression on the right-hand side is also a square-integrable martingale; the verification uses only that each $H^i$ is $\mathcal F_{t_i}$-measurable (see the discussion in the comments above), not predictability per se. However, the fact that $H$ is predictable is important for the definition \eqref{1}: if $H$ were simple but not necessarily predictable, then \eqref{1} would no longer be the appropriate definition of the stochastic integral of $H$.
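(To spell out that verification for a single summand: a sketch, assuming for simplicity that the $t_i$ are deterministic times; the stopping-time case uses optional sampling instead. For $t_i \le s \le t$, $$ \mathbb E\big[H^i\,(M_{t_{i+1}\wedge t}-M_{t_i\wedge t})\,\big|\,\mathcal F_s\big] = H^i\,\big(\mathbb E[M_{t_{i+1}\wedge t}\mid\mathcal F_s]-M_{t_i}\big) = H^i\,(M_{t_{i+1}\wedge s}-M_{t_i\wedge s}), $$ which is the corresponding summand of $(H\bullet M)_s$; the case $s<t_i$ follows by first conditioning on $\mathcal F_{t_i}$, which makes the increment vanish. The only property of $H^i$ used is that it is $\mathcal F_{t_i}$-measurable and can therefore be pulled out of the conditional expectation.)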

We want our stochastic integral to behave like a classical integral whenever a classical definition is applicable. This means that if $H$ were simple, right-continuous, and adapted, instead of predictable (that is, if $H_t = H^i$ for $t\in[t_i,t_{i+1})$), then the correct definition of $H\bullet M$, by analogy with Stieltjes integration, would be $$ (H\bullet M)_t = H_0M_0 + \sum_i H_{t_i} (M_{(t_{i+1}\wedge t)-}-M_{(t_i\wedge t)-}), $$ where $X_{t-}= \lim_{s\uparrow t} X_s$. In this case, we can no longer guarantee that $H \bullet M$ is a martingale, unless, for example, $M$ happens to be continuous, so that $M_t = M_{t-}$ and the proof of the martingale property for \eqref{1} applies again. (I believe this is also indicative of why we can define the stochastic integral for a much larger class of integrands when the integrator is continuous.)
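To make this concrete, here is a small Monte Carlo sketch (in Python; the example and all names are mine, not part of the original post) of a classical illustration with a jump martingale: take $M_t = N_t - t$ for a rate-one Poisson process $N$, let $T_1$ be its first jump time, and compare the predictable simple integrand $H=\mathbf 1_{(0,T_1]}$ with the right-continuous adapted integrand $H=\mathbf 1_{[0,T_1)}$. Since $M$ has finite variation, both integrals can be evaluated pathwise, and the simulation just averages the resulting closed-form expressions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_paths, t = 200_000, 2.0

# M_s = N_s - s, with N a rate-1 Poisson process, is a martingale;
# T1 is the time of the first jump of N.
T1 = rng.exponential(1.0, size=n_paths)

# Predictable simple integrand H = 1_{(0, T1]}: the jump at T1 is picked up, and
#   (H . M)_t = N_{t ^ T1} - (t ^ T1) = 1_{T1 <= t} - min(t, T1).
integral_predictable = (T1 <= t).astype(float) - np.minimum(t, T1)

# Right-continuous adapted (non-predictable) H = 1_{[0, T1)}: H_{T1} = 0, so the jump
# is missed, and (H . M)_t = -min(t, T1).
integral_adapted = -np.minimum(t, T1)

print("E[(H . M)_t], predictable H:     ", integral_predictable.mean())  # close to 0
print("E[(H . M)_t], non-predictable H: ", integral_adapted.mean())      # close to -(1 - exp(-t))
```

Only the predictable version has constant (zero) expectation; the right-continuous version never sees the jump of $M$ and reduces to the drift $-(t\wedge T_1)$, which is exactly the kind of failure described above.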