
While reading the following : https://almostsuremath.com/2010/01/18/quadratic-variations-and-integration-by-parts/#scn_ibp_eq7

there is a statement that I cannot see how to justify. It is part of the proof of the existence of the quadratic variation of a semimartingale $X$. Here $P^k$ is a sequence of stochastic partitions whose mesh tends to $0$, i.e. $$P = \{0=\tau_0 \le \tau_1 \le \cdots \uparrow \infty\},$$ and $$|P^t| = \max_n (\tau_n \wedge t - \tau_{n-1}\wedge t).$$

In the process $\alpha_s^P$ defined below, which is in fact just $X_{s-}- X_{\tau_{n-1}}$ for $s \in (\tau_{n-1},\tau_n]$, why do we get $$|\alpha^{P_k}| \le 2|X_{-}|?$$ I cannot quite figure out how we can bound $|X_{\tau_{n-1}}|$ by $|X_{-}|$ for all $k$, which is what we need in order to use dominated convergence. $X_{\tau_{n-1}}$ is close to $X_{-}$, and the difference can be bounded by $2|X_{-}|$ when the mesh is near $0$, but that holds only for large $k$, with the threshold depending on each $\omega$ separately, so I don't see how a uniform bound can be found here. I would greatly appreciate any help.


  • Well, if George doesn't respond to the queries made on the blog, I guess it will be our duty to fill in those wonderful notes. I answered a previous question of yours, as you might remember. The need of the hour was for me to fill in details. On this occasion, I will perhaps do this once again. It is good that these subtle questions are coming in, because GL's blog deserves to be frequented by more people. – Sarvesh Ravichandran Iyer Jan 29 '22 at 14:22
  • @SarveshRavichandranIyer Just saw this comment Sarvesh. Yes George's notes are gold. I thought this argument was something very simple that I was just missing because the inequality is just stated in the proof but I guess it is not that immediate. Really appreciate your input and attention to details. – nomadicmathematician Jan 29 '22 at 17:49
  • I'm not able to get around the problem with ease, actually. It's quite obvious that $\alpha^P\leq 2|X_-|$ isn't true, for the reasons that you mention. Then one has to find a dominating process for the process $\sum_{n=1}^\infty X_{\tau_{n-1}}1_{\tau_{n-1} < s \leq \tau_n}$ for any partition $P$ (or countable collection $P^k$), and combining it with the integrable $X_{s-}$ gives a dominating process for $\alpha^P$. The only process I can think of is something involving the variation process, that covers the maximum variation over partitions. However, is that integrable? – Sarvesh Ravichandran Iyer Jan 30 '22 at 20:16
  • @SarveshRavichandranIyer I do not know what kind of variation process you have in mind. I thought we would need to consider the sum of the differences $(X_{s-})$ and $X_{\tau_{n-1}}$ jointly and find a bound by using left hand limit approximation, but I could not think of a uniform bound for it. – nomadicmathematician Jan 30 '22 at 20:57
  • I think the following can work, hopefully: $X$ is cadlag, and therefore the process $Y_t = \sup_{s<t} |X_s|$ is a locally bounded process, hence $X$-integrable, and it dominates the process $\sum_{n=1}^\infty X_{\tau_{n-1}} 1_{\tau_{n-1} < s \leq \tau_n}$. So $\alpha^{P_k}$ is dominated by $|X_-| + |Y|$, an integrable process, and we can prove that it goes to $0$ in probability, so dominated convergence in probability applies. At the end of the day, the bound $2|X_-|$ is definitely wrong, so that will address that part of the question. – Sarvesh Ravichandran Iyer Jan 31 '22 at 09:31
  • @SarveshRavichandranIyer Right this seems like a simple solution to the problem. $Y$ is locally bounded because it is caglad since it is just the $X^*_{-}$, which is the left hand limit of the running maximum of $X$ (which you showed is cadlag before) right? – nomadicmathematician Jan 31 '22 at 09:40
  • You are right, @nomadicmathematician. Upon seeing further posts, I found that here, in the proof of lemma $2$, the same argument seems to be used with $U$ in place of $X$. – Sarvesh Ravichandran Iyer Jan 31 '22 at 09:43
  • @SarveshRavichandranIyer Bravo. If you don't mind just typing this into an answer I would accept it thanks for clearing this up. I was definitely overthinking it. Please check my other bounty on George's post when you've got time and I've got a new one on the proof on the existence of solutions to SDEs https://math.stackexchange.com/questions/4369646/on-the-proof-of-the-existence-of-solutions-to-sde-via-step-function-approximatio – nomadicmathematician Jan 31 '22 at 09:49
  • Thanks, I'll do this. I've been writing a more detailed answer so that I can cover the entire lemma in detail, so I'll post that as well but I'll make sure to go over this part more thoroughly, and I'll look through the SDE question when I can. – Sarvesh Ravichandran Iyer Jan 31 '22 at 10:04
  • @SarveshRavichandranIyer Have you checked my comment which I think may be what you had in mind here? – nomadicmathematician Feb 02 '22 at 13:44
  • Yes, I saw it. I haven't had time to respond to it yet, but I'll make sure to do that. Thanks for the reminder. – Sarvesh Ravichandran Iyer Feb 02 '22 at 13:55
  • @SarveshRavichandranIyer I would greatly appreciate if you could take a look at this one before it expires https://math.stackexchange.com/questions/4361581/approximation-of-jump-times-of-cadlag-adapted-processes-by-stopping-times-runnin – nomadicmathematician Feb 03 '22 at 09:04
  • I couldn't pay enough attention to that one, unfortunately, and may not be the best person to answer it. Furthermore, at this moment I am somewhat busy, which is why I put off the improvement of the post below till the weekend. In that period I also plan to see the SDE question which you linked to me some time ago, and whether I can answer it or not. I'm sorry about the delay but thanks for the reminder. – Sarvesh Ravichandran Iyer Feb 03 '22 at 10:21
  • @SarveshRavichandranIyer Nothing to be sorry about. Appreciate the attention. – nomadicmathematician Feb 03 '22 at 10:30

1 Answer


George has defined the processes $[X], [X,Y]$ for semimartingales $X,Y$ to be such that $[X] = [X,X]$ and the a.s. integration-by-parts formula $$ XY = X_0Y_0 + \int X_- dY + \int Y_- dX + [X,Y] $$ holds. He then proceeds to prove Theorem 1, which states that $[X,Y]^{P_n} \to [X,Y]$ in the semimartingale topology as $n \to \infty$, where $P_n$ is a sequence of stochastic partitions with mesh size going to zero in probability. The definition of $[X,Y]^{P_n}$ is as per equation $(4)$ of the text.

Therefore, it is wise to go through a proof of Theorem 1 with the details.


We begin with a stochastic partition $P = \{0 = \tau_0 \leq \tau_1 \leq \cdots \uparrow \infty\}$ and an arbitrary $t>0$. Note that $4[X,Y] = [X+Y]-[X-Y]$, courtesy of the polarization identity that follows from the semimartingale definition; therefore it suffices to prove that for a semimartingale $X$ we have $[X]^{P_k} \to [X]$ in the semimartingale topology, where $P_k$ is a sequence of partitions whose mesh converges to zero in probability.
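To spell out the polarization step (a sketch, using only the bilinearity and symmetry of $[\cdot,\cdot]$, which follow from the integration-by-parts definition): $$ [X+Y] = [X] + 2[X,Y] + [Y], \qquad [X-Y] = [X] - 2[X,Y] + [Y], $$ and subtracting the second identity from the first gives $[X+Y] - [X-Y] = 4[X,Y]$. So knowing $[Z]$ for every semimartingale $Z$ determines every $[X,Y]$.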

The idea of the proof is to use stochastic calculus and express $[X]_t^P$ as a stochastic integral of an appropriate random process. That process can be understood when we write down the definition of $[X]_t^P$: $$ [X]_t^P = \sum_{n=1}^\infty (X_{\tau_n \wedge t} - X_{\tau_{n-1} \wedge t})^2. $$ If we let $\delta X_n = X_{\tau_n \wedge t} - X_{\tau_{n-1} \wedge t}$, then the above expression involves $(\delta X_n)^2$, so we expand the square; not according to $(a-b)^2 = a^2+b^2-2ab$, but rather $(a-b)^2 = a^2-b^2-2b(a-b)$: $$ (\delta X_n)^2 = X_{\tau_n \wedge t}^2 - X_{\tau_{n-1} \wedge t}^2 - 2 X_{\tau_{n-1} \wedge t}(\delta X_n) \tag{1} $$
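The second expansion is elementary algebra; expanding its right-hand side confirms it: $$ a^2 - b^2 - 2b(a-b) = a^2 - b^2 - 2ab + 2b^2 = a^2 + b^2 - 2ab = (a-b)^2. $$ Its advantage over the symmetric expansion is that the $a^2 - b^2$ terms telescope over the partition, while $b(a-b)$ is exactly the increment of an elementary stochastic integral.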

The idea behind this decomposition is that we can bring in a stochastic integral on the RHS, using the difference term $\delta X_n$. The first thing is to deal with the other two terms, though. We see an opportunity to bring in $[X]$ here, using its defining identity $$ X_t^2 = X_0^2 + 2\int_{0}^t X_{s-}dX_s + [X]_t \tag{D} $$ We evaluate $(D)$ at the times $\tau_n \wedge t$ and $\tau_{n-1} \wedge t$ and subtract the resulting expressions to get $$ X_{\tau_n \wedge t}^2 - X_{\tau_{n-1} \wedge t}^2 = [X]_{\tau_n \wedge t} - [X]_{\tau_{n-1} \wedge t} +2\int_0^t X_{s-} 1_{\tau_{n-1} < s \leq \tau_n} dX_s \tag{Q} $$

Now, we expect $2 X_{\tau_{n-1} \wedge t}(\delta X_n)$ to be the stochastic integral of some process with respect to $X$. A glance at the definition of the integral for elementary processes tells you that the process $Z_s = 2X_{\tau_{n-1}} 1_{\tau_{n-1} < s \leq \tau_n}$ is elementary and satisfies $\int_0^t Z_s dX_s = 2 X_{\tau_{n-1} \wedge t}(\delta X_n)$ a.s.; combining this and $(Q)$ into $(1)$ gives $$ (\delta X_n)^2= [X]_{\tau_n \wedge t} - [X]_{\tau_{n-1} \wedge t} +2\int_0^t (X_{s-} - X_{\tau_{n-1}}) 1_{\tau_{n-1} < s \leq \tau_n} dX_s\tag{$E_n$} $$
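For the record, here is the elementary-integral computation being used (a sketch; $Z$ is elementary because $X_{\tau_{n-1}}$ is $\mathcal{F}_{\tau_{n-1}}$-measurable): $$ \int_0^t Z_s\,dX_s = \int_0^t 2X_{\tau_{n-1}} 1_{\tau_{n-1} < s \leq \tau_n}\,dX_s = 2X_{\tau_{n-1}}\left(X_{\tau_n \wedge t} - X_{\tau_{n-1}\wedge t}\right) = 2X_{\tau_{n-1}\wedge t}\,\delta X_n, $$ where the last equality holds because on $\{\tau_{n-1} \geq t\}$ both sides vanish ($\delta X_n = X_t - X_t = 0$ there), while on $\{\tau_{n-1} < t\}$ we have $X_{\tau_{n-1}} = X_{\tau_{n-1}\wedge t}$.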


Now, we sum the equalities $(E_n)$ from $n=1$ to $\infty$ (we'll justify shortly that this yields a well-defined equality of stochastic processes). On the LHS of this equality we obtain the quantity $\sum_{n=1}^\infty (\delta X_n)^2$, which we know equals $[X]_t^P$ by definition. We also know that $\sum_{n=1}^\infty ([X]_{\tau_n \wedge t} - [X]_{\tau_{n-1} \wedge t}) = [X]_t$, because $\tau_n \uparrow \infty$ a.s., so only finitely many terms are nonzero a.s. and the sum telescopes.
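The telescoping can be made explicit: for a.e. $\omega$ there is a finite $N$ with $\tau_N(\omega) > t$, so $\tau_n \wedge t = t$ for all $n \geq N$ and $$ \sum_{n=1}^\infty \left([X]_{\tau_n \wedge t} - [X]_{\tau_{n-1}\wedge t}\right) = \sum_{n=1}^{N} \left([X]_{\tau_n\wedge t} - [X]_{\tau_{n-1}\wedge t}\right) = [X]_{\tau_N \wedge t} - [X]_{\tau_0 \wedge t} = [X]_t, $$ using $\tau_0 = 0$ and $[X]_0 = 0$.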

What of the stochastic integral? What we know for a fact is that if we can interchange the limit and the integral, then we can express that entire term as a stochastic integral of a single process, and we have an identity to work with.

For that, we must use a fact: a locally bounded predictable process is $X$-integrable (Lemma 5). As a corollary, the left-limit process of an adapted cadlag process is $X$-integrable, because the left-limit process of a cadlag process is caglad (left-continuous with right limits), and all adapted caglad processes are predictable and locally bounded.

We remark that George's proof at this point is perhaps wrong, and suggest a more rigorous fix. Consider the process given by $X^*_t = \sup_{s \leq t} |X_{s}|$. This is the supremum process of the cadlag process $X$, hence it is cadlag itself, and therefore the left-limit process $X^*_{t-}$ is $X$-integrable by the fact we've stated. However, note that $X^*_{s-} = \sup_{u < s} |X_u|$, therefore we have a.s. the inequality $$ \left|\sum_{m=1}^n (X_{s-} - X_{\tau_{m-1}}) 1_{\tau_{m-1} < s \leq \tau_m} \right| \leq |X_{s-}|+|X^*_{s-}| $$ for all partitions $P$ and all $n$, where the RHS is an $X$-integrable process. Now, all we need to do is invoke dominated convergence. The first point is that the finite sums $$ \sum_{m=1}^n (X_{s-} - X_{\tau_{m-1}}) 1_{\tau_{m-1} < s \leq \tau_m} \to \sum_{m=1}^\infty (X_{s-} - X_{\tau_{m-1}}) 1_{\tau_{m-1} < s \leq \tau_m} = \alpha_s^P $$ in probability as $n \to \infty$, for fixed $s$. Indeed, the probability that the two differ is bounded by the probability that $\tau_{n-1} \leq s$, which goes to $0$ as $n \to \infty$ because $\tau_n \to \infty$ a.s. By dominated convergence, it follows that $$ \int_{0}^t \sum_{m=1}^n (X_{s-} - X_{\tau_{m-1}}) 1_{\tau_{m-1} < s \leq \tau_m}dX_s \to \int_0^t\sum_{m=1}^\infty (X_{s-} - X_{\tau_{m-1}}) 1_{\tau_{m-1} < s \leq \tau_m} dX_s = \int_0^t \alpha_s^P dX_s $$
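The displayed inequality itself deserves a line of justification. Fix $s$ and let $m$ be the unique index with $\tau_{m-1} < s \leq \tau_m$; then only one summand survives, and since $\tau_{m-1} < s$, $$ \left| X_{s-} - X_{\tau_{m-1}} \right| \leq |X_{s-}| + |X_{\tau_{m-1}}| \leq |X_{s-}| + \sup_{u < s} |X_u| = |X_{s-}| + X^*_{s-}, $$ which is exactly the claimed dominating process, uniformly over partitions $P$ and over $n$.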

which therefore leads to the identity $$ [X]_t^P = [X]_t + \int_0^t \alpha_s^P dX_s $$


Now, let $P_k$ be a sequence of partitions whose mesh sizes converge to $0$ almost surely. We claim that $\alpha_s^{P_k} \to 0$ almost surely. To prove this, we'll show that whenever $|P^k|(\omega) \to 0$ we have $\alpha_s^{P_k}(\omega) \to 0$ for all $s$.

Fix such an $\omega$. If the mesh size of $P^k$ is $|P^k|$, then for $s \in (\tau_{n-1}, \tau_n]$ we have $0 < s - \tau_{n-1} \leq |P^k|$, so $|\alpha^{P_k}_s| \leq \sup_{0 < \epsilon \leq |P^k|}|X_{s-} - X_{s-\epsilon}|$ for all $s$. In particular, fixing an $s$, the existence of the left limit $X_{s-}$ means that for every $\delta>0$ there is a $\delta'>0$ with $0<s'<\delta' \implies |X_{s-}-X_{s-s'}|<\delta$, and for this $\delta'$ there is a $K$ such that $k>K$ implies $|P^k| < \delta'$. Combining these facts, $|\alpha^{P_k}_s| \leq \delta$ for all $k>K$; since $\delta$ and $s$ were arbitrary, $\alpha^{P_k}_s \to 0$ for all $s$.

Therefore, $\alpha^{P_k}_s \to 0$ a.s. (and therefore in probability). Now, $\alpha_s^{P_k}$ is the limit, in probability, of the sequence of partial sums defining it, and we've already seen that those partial sums are bounded by an integrable process. Since convergence in probability implies convergence a.s. along a subsequence, it follows that $|\alpha_s^{P_k}| \leq |X_{s-}| + |X^*_{s-}|$ a.s., and by dominated convergence, $\int_0^t\alpha^{P_k}_sdX_s \to 0$ for all $t$, i.e. $[X]^{P_k}_t \to [X]_t$ for all $t$.

Note that dominated convergence is available in both the ucp and semimartingale topologies, so the convergence above holds in either of them, the semimartingale convergence being the stronger one.


To prove this when $|P_k| \to 0$ only in probability, we use the fact that we can extract a subsequence of $P_k$ whose mesh goes to $0$ a.s. Therefore, we get the following result: for any subsequence $\{P_{k_n}\}$ we can find a further subsequence satisfying $[X]^{P_{k_{n_l}}} \to [X]$. Thus, every subsequence of $[X]^{P_k}$ has a further subsequence that converges to $[X]$ in the semimartingale topology: this is well-known to be equivalent to convergence in the same topology. The proof is quite similar to the corresponding result for metric spaces: a sequence in a metric space $(X,d)$ converges to $x$ if and only if every subsequence has a further subsequence converging to $x$.
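For completeness, a sketch of the subsequence principle in the metric-space setting: if $x_n \not\to x$, then there exist $\epsilon > 0$ and a subsequence $(x_{n_k})$ with $d(x_{n_k}, x) \geq \epsilon$ for all $k$; this subsequence admits no further subsequence converging to $x$, contradicting the hypothesis. Since the semimartingale topology is metrizable, the same argument applies there.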

  • Thanks for the phenomenal explanations in detail. There are just some typos in (D), (Q), (E) where there shouldn't be a square in the brackets. Also, why is $|\alpha_s^{P_k}|\le \inf_{\epsilon < |P^k|}|X_{s-}-X_{s-\epsilon}|?$ And below it should be $X_{s-s'}$? I think there may be some mistake in the way the infimum is set up here. – nomadicmathematician Feb 01 '22 at 10:14
  • @nomadicmathematician Thanks, I'll check those errors and get back to you. I tried to be careful at the end but I'll make sure I have it wrapped up more carefully. – Sarvesh Ravichandran Iyer Feb 01 '22 at 10:15
  • Also there is a fact that I glanced over before but as you mention above $(E_n)$, where the process $Z_t$ is elementary, George actually defines elementary processes for fixed times $t_k, s_k$. But he doesn't explain why it is still an elementary process when we replace these by stopping times. Could you explain this point as well? – nomadicmathematician Feb 01 '22 at 10:16
  • After thinking about this for a while, I think you meant $|\alpha_s^{P_k}|\le \sup_{\epsilon < |P^k|}|X_s - X_{s-\epsilon}|$, since $|s-\tau_{n-1}| < |P^k|$, so I think it should be sup instead of inf. Then, as you argue, for $k>K$, $|P^k|<\delta'$, where for all $s'\in(0,\delta')$, we have $|X_s - X_{s-s'}|<\delta$, thus since $\epsilon<|P^k|$ is a smaller set than $\epsilon<\delta'$ for $k>K$, we have $\sup_{\epsilon < |P^k|}|X_s - X_{s-\epsilon}|\le \sup_{\epsilon < \delta'}|X_s - X_{s-\epsilon}|$ and this is $\le \delta$ for $k>K$. $|\alpha_s^{P_k}|\le \delta$ for $k>K$. Is this correct? – nomadicmathematician Feb 01 '22 at 10:55